00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2405 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3670 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.120 The recommended git tool is: git 00:00:00.120 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.141 Fetching changes from the remote Git repository 00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.172 Using shallow fetch with depth 1 00:00:00.172 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.172 > git --version # timeout=10 00:00:00.189 > git --version # 'git version 2.39.2' 00:00:00.189 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.734 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.748 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.760 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.761 > git config core.sparsecheckout # timeout=10 00:00:05.772 > git read-tree -mu HEAD # timeout=10 00:00:05.787 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.811 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.811 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.914 [Pipeline] Start of Pipeline 00:00:05.927 [Pipeline] library 00:00:05.929 Loading library shm_lib@master 00:00:05.929 Library shm_lib@master is cached. Copying from home. 00:00:05.947 [Pipeline] node 00:00:05.960 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.962 [Pipeline] { 00:00:05.973 [Pipeline] catchError 00:00:05.975 [Pipeline] { 00:00:05.986 [Pipeline] wrap 00:00:05.995 [Pipeline] { 00:00:06.001 [Pipeline] stage 00:00:06.002 [Pipeline] { (Prologue) 00:00:06.019 [Pipeline] echo 00:00:06.020 Node: VM-host-SM38 00:00:06.024 [Pipeline] cleanWs 00:00:06.035 [WS-CLEANUP] Deleting project workspace... 00:00:06.035 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.042 [WS-CLEANUP] done 00:00:06.233 [Pipeline] setCustomBuildProperty 00:00:06.355 [Pipeline] httpRequest 00:00:06.923 [Pipeline] echo 00:00:06.924 Sorcerer 10.211.164.20 is alive 00:00:06.932 [Pipeline] retry 00:00:06.934 [Pipeline] { 00:00:06.944 [Pipeline] httpRequest 00:00:06.949 HttpMethod: GET 00:00:06.950 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.950 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.966 Response Code: HTTP/1.1 200 OK 00:00:06.967 Success: Status code 200 is in the accepted range: 200,404 00:00:06.968 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.248 [Pipeline] } 00:00:09.263 [Pipeline] // retry 00:00:09.270 [Pipeline] sh 00:00:09.553 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.570 [Pipeline] httpRequest 00:00:09.920 [Pipeline] echo 00:00:09.921 Sorcerer 10.211.164.20 is alive 00:00:09.930 [Pipeline] retry 00:00:09.933 [Pipeline] { 00:00:09.951 [Pipeline] httpRequest 00:00:09.956 HttpMethod: GET 00:00:09.957 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:09.957 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:09.977 Response Code: HTTP/1.1 200 OK 00:00:09.977 Success: Status code 200 is in the accepted range: 200,404 00:00:09.978 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:01:25.865 [Pipeline] } 00:01:25.881 [Pipeline] // retry 00:01:25.889 [Pipeline] sh 00:01:26.175 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:01:29.528 [Pipeline] sh 00:01:29.815 + git -C spdk log --oneline -n5 00:01:29.815 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:01:29.815 5592070b3 doc: update nvmf_tracing.md 00:01:29.815 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:29.815 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:01:29.815 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:01:29.831 [Pipeline] withCredentials 00:01:29.841 > git --version # timeout=10 00:01:29.853 > git --version # 'git version 2.39.2' 00:01:29.871 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:29.872 [Pipeline] { 00:01:29.880 [Pipeline] retry 00:01:29.882 [Pipeline] { 00:01:29.897 [Pipeline] sh 00:01:30.180 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:30.194 [Pipeline] } 00:01:30.212 [Pipeline] // retry 00:01:30.217 [Pipeline] } 00:01:30.234 [Pipeline] // withCredentials 00:01:30.244 [Pipeline] httpRequest 00:01:30.583 [Pipeline] echo 00:01:30.585 Sorcerer 10.211.164.20 is alive 00:01:30.594 [Pipeline] retry 00:01:30.596 [Pipeline] { 00:01:30.611 [Pipeline] httpRequest 00:01:30.617 HttpMethod: GET 00:01:30.617 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:30.618 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:30.619 Response Code: HTTP/1.1 200 OK 00:01:30.619 Success: Status code 200 is in the accepted range: 200,404 00:01:30.620 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:43.902 [Pipeline] } 00:01:43.913 [Pipeline] // retry 
00:01:43.918 [Pipeline] sh 00:01:44.198 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:45.592 [Pipeline] sh 00:01:45.875 + git -C dpdk log --oneline -n5 00:01:45.875 caf0f5d395 version: 22.11.4 00:01:45.875 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:45.875 dc9c799c7d vhost: fix missing spinlock unlock 00:01:45.875 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:45.875 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:45.894 [Pipeline] writeFile 00:01:45.909 [Pipeline] sh 00:01:46.194 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:46.209 [Pipeline] sh 00:01:46.494 + cat autorun-spdk.conf 00:01:46.494 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.494 SPDK_TEST_NVME=1 00:01:46.494 SPDK_TEST_FTL=1 00:01:46.494 SPDK_TEST_ISAL=1 00:01:46.494 SPDK_RUN_ASAN=1 00:01:46.494 SPDK_RUN_UBSAN=1 00:01:46.494 SPDK_TEST_XNVME=1 00:01:46.494 SPDK_TEST_NVME_FDP=1 00:01:46.494 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:46.494 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:46.494 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.503 RUN_NIGHTLY=1 00:01:46.505 [Pipeline] } 00:01:46.517 [Pipeline] // stage 00:01:46.533 [Pipeline] stage 00:01:46.535 [Pipeline] { (Run VM) 00:01:46.548 [Pipeline] sh 00:01:46.831 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:46.831 + echo 'Start stage prepare_nvme.sh' 00:01:46.831 Start stage prepare_nvme.sh 00:01:46.831 + [[ -n 1 ]] 00:01:46.831 + disk_prefix=ex1 00:01:46.831 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:46.831 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:46.831 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:46.831 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.831 ++ SPDK_TEST_NVME=1 00:01:46.831 ++ SPDK_TEST_FTL=1 00:01:46.831 ++ SPDK_TEST_ISAL=1 00:01:46.831 ++ SPDK_RUN_ASAN=1 00:01:46.831 ++ SPDK_RUN_UBSAN=1 00:01:46.831 ++ SPDK_TEST_XNVME=1 00:01:46.831 ++ SPDK_TEST_NVME_FDP=1 00:01:46.831 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:46.831 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:46.831 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.831 ++ RUN_NIGHTLY=1 00:01:46.831 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:46.831 + nvme_files=() 00:01:46.831 + declare -A nvme_files 00:01:46.831 + backend_dir=/var/lib/libvirt/images/backends 00:01:46.831 + nvme_files['nvme.img']=5G 00:01:46.831 + nvme_files['nvme-cmb.img']=5G 00:01:46.831 + nvme_files['nvme-multi0.img']=4G 00:01:46.831 + nvme_files['nvme-multi1.img']=4G 00:01:46.831 + nvme_files['nvme-multi2.img']=4G 00:01:46.831 + nvme_files['nvme-openstack.img']=8G 00:01:46.831 + nvme_files['nvme-zns.img']=5G 00:01:46.831 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:46.831 + (( SPDK_TEST_FTL == 1 )) 00:01:46.831 + nvme_files["nvme-ftl.img"]=6G 00:01:46.831 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:46.831 + nvme_files["nvme-fdp.img"]=1G 00:01:46.831 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:46.832 + for nvme in "${!nvme_files[@]}" 00:01:46.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:46.832 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.832 + for nvme in "${!nvme_files[@]}" 00:01:46.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:01:47.405 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:47.405 + for nvme in "${!nvme_files[@]}" 00:01:47.405 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:47.405 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.405 + for nvme in "${!nvme_files[@]}" 00:01:47.405 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:47.667 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:47.667 + for nvme in "${!nvme_files[@]}" 00:01:47.667 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:47.928 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.928 + for nvme in "${!nvme_files[@]}" 00:01:47.928 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:47.928 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.928 + for nvme in "${!nvme_files[@]}" 00:01:47.928 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:48.186 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:48.187 + for nvme in "${!nvme_files[@]}" 00:01:48.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:01:48.187 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:48.187 + for nvme in "${!nvme_files[@]}" 00:01:48.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:48.768 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:48.768 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:48.768 + echo 'End stage prepare_nvme.sh' 00:01:48.768 End stage prepare_nvme.sh 00:01:48.800 [Pipeline] sh 00:01:49.078 + DISTRO=fedora39 00:01:49.078 + CPUS=10 00:01:49.078 + RAM=12288 00:01:49.078 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:49.078 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:49.078 00:01:49.078 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:49.078 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:49.078 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:49.078 HELP=0 00:01:49.078 DRY_RUN=0 00:01:49.078 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:01:49.078 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:49.078 NVME_AUTO_CREATE=0 00:01:49.078 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:01:49.078 NVME_CMB=,,,, 00:01:49.078 NVME_PMR=,,,, 00:01:49.078 NVME_ZNS=,,,, 00:01:49.078 NVME_MS=true,,,, 00:01:49.078 NVME_FDP=,,,on, 00:01:49.078 SPDK_VAGRANT_DISTRO=fedora39 00:01:49.078 SPDK_VAGRANT_VMCPU=10 00:01:49.078 SPDK_VAGRANT_VMRAM=12288 00:01:49.078 SPDK_VAGRANT_PROVIDER=libvirt 00:01:49.078 SPDK_VAGRANT_HTTP_PROXY= 00:01:49.078 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:49.078 SPDK_OPENSTACK_NETWORK=0 00:01:49.078 VAGRANT_PACKAGE_BOX=0 00:01:49.078 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:49.078 FORCE_DISTRO=true 00:01:49.078 VAGRANT_BOX_VERSION= 00:01:49.078 EXTRA_VAGRANTFILES= 00:01:49.078 NIC_MODEL=e1000 00:01:49.078 00:01:49.078 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:49.078 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:51.622 Bringing machine 'default' up with 'libvirt' provider... 00:01:52.191 ==> default: Creating image (snapshot of base box volume). 00:01:52.191 ==> default: Creating domain with the following settings... 
00:01:52.191 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732664139_cf36ac07ad9be7744945 00:01:52.191 ==> default: -- Domain type: kvm 00:01:52.191 ==> default: -- Cpus: 10 00:01:52.191 ==> default: -- Feature: acpi 00:01:52.191 ==> default: -- Feature: apic 00:01:52.191 ==> default: -- Feature: pae 00:01:52.191 ==> default: -- Memory: 12288M 00:01:52.191 ==> default: -- Memory Backing: hugepages: 00:01:52.191 ==> default: -- Management MAC: 00:01:52.191 ==> default: -- Loader: 00:01:52.191 ==> default: -- Nvram: 00:01:52.191 ==> default: -- Base box: spdk/fedora39 00:01:52.191 ==> default: -- Storage pool: default 00:01:52.191 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732664139_cf36ac07ad9be7744945.img (20G) 00:01:52.191 ==> default: -- Volume Cache: default 00:01:52.191 ==> default: -- Kernel: 00:01:52.191 ==> default: -- Initrd: 00:01:52.191 ==> default: -- Graphics Type: vnc 00:01:52.191 ==> default: -- Graphics Port: -1 00:01:52.191 ==> default: -- Graphics IP: 127.0.0.1 00:01:52.191 ==> default: -- Graphics Password: Not defined 00:01:52.191 ==> default: -- Video Type: cirrus 00:01:52.191 ==> default: -- Video VRAM: 9216 00:01:52.191 ==> default: -- Sound Type: 00:01:52.191 ==> default: -- Keymap: en-us 00:01:52.191 ==> default: -- TPM Path: 00:01:52.191 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:52.191 ==> default: -- Command line args: 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:52.191 ==> default: -> value=-drive, 00:01:52.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:52.191 ==> default: -> value=-drive, 00:01:52.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:52.191 ==> default: -> value=-drive, 00:01:52.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:52.191 ==> default: -> value=-drive, 00:01:52.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:52.191 ==> default: -> value=-drive, 00:01:52.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:52.191 ==> default: -> value=-drive, 00:01:52.191 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:52.191 ==> default: -> value=-device, 00:01:52.191 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:52.191 ==> default: Creating shared folders metadata... 00:01:52.191 ==> default: Starting domain. 00:01:54.103 ==> default: Waiting for domain to get an IP address... 00:02:12.246 ==> default: Waiting for SSH to become available... 00:02:12.246 ==> default: Configuring and enabling network interfaces... 00:02:16.448 default: SSH address: 192.168.121.81:22 00:02:16.448 default: SSH username: vagrant 00:02:16.448 default: SSH auth method: private key 00:02:17.832 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:26.013 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:31.297 ==> default: Mounting SSHFS shared folder... 00:02:32.237 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:32.237 ==> default: Checking Mount.. 00:02:33.624 ==> default: Folder Successfully Mounted! 00:02:33.624 00:02:33.624 SUCCESS! 00:02:33.624 00:02:33.624 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:33.624 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:33.624 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:33.624 00:02:33.634 [Pipeline] } 00:02:33.649 [Pipeline] // stage 00:02:33.662 [Pipeline] dir 00:02:33.663 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:33.671 [Pipeline] { 00:02:33.684 [Pipeline] catchError 00:02:33.686 [Pipeline] { 00:02:33.696 [Pipeline] sh 00:02:33.985 + vagrant ssh-config --host vagrant 00:02:33.986 + sed -ne '/^Host/,$p' 00:02:33.986 + tee ssh_conf 00:02:36.528 Host vagrant 00:02:36.528 HostName 192.168.121.81 00:02:36.528 User vagrant 00:02:36.528 Port 22 00:02:36.528 UserKnownHostsFile /dev/null 00:02:36.528 StrictHostKeyChecking no 00:02:36.528 PasswordAuthentication no 00:02:36.528 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:36.528 IdentitiesOnly yes 00:02:36.528 LogLevel FATAL 00:02:36.528 ForwardAgent yes 00:02:36.528 ForwardX11 yes 00:02:36.528 00:02:36.541 [Pipeline] withEnv 00:02:36.543 [Pipeline] { 00:02:36.554 [Pipeline] sh 00:02:36.834 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:36.834 source /etc/os-release 00:02:36.834 [[ -e /image.version ]] && img=$(< /image.version) 00:02:36.834 # Minimal, systemd-like check. 
00:02:36.834 if [[ -e /.dockerenv ]]; then 00:02:36.834 # Clear garbage from the node'\''s name: 00:02:36.834 # agt-er_autotest_547-896 -> autotest_547-896 00:02:36.834 # $HOSTNAME is the actual container id 00:02:36.834 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:36.834 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:36.834 # We can assume this is a mount from a host where container is running, 00:02:36.835 # so fetch its hostname to easily identify the target swarm worker. 00:02:36.835 container="$(< /etc/hostname) ($agent)" 00:02:36.835 else 00:02:36.835 # Fallback 00:02:36.835 container=$agent 00:02:36.835 fi 00:02:36.835 fi 00:02:36.835 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:36.835 ' 00:02:36.847 [Pipeline] } 00:02:36.863 [Pipeline] // withEnv 00:02:36.871 [Pipeline] setCustomBuildProperty 00:02:36.884 [Pipeline] stage 00:02:36.886 [Pipeline] { (Tests) 00:02:36.902 [Pipeline] sh 00:02:37.195 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:37.206 [Pipeline] sh 00:02:37.540 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:37.608 [Pipeline] timeout 00:02:37.609 Timeout set to expire in 50 min 00:02:37.611 [Pipeline] { 00:02:37.624 [Pipeline] sh 00:02:37.902 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:38.467 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:02:38.477 [Pipeline] sh 00:02:38.756 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:39.027 [Pipeline] sh 00:02:39.301 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:39.313 [Pipeline] sh 00:02:39.592 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:39.592 ++ readlink -f spdk_repo 00:02:39.592 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:39.592 + [[ -n /home/vagrant/spdk_repo ]] 00:02:39.592 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:39.592 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:39.592 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:39.592 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:39.592 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:39.592 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:39.592 + cd /home/vagrant/spdk_repo 00:02:39.592 + source /etc/os-release 00:02:39.592 ++ NAME='Fedora Linux' 00:02:39.592 ++ VERSION='39 (Cloud Edition)' 00:02:39.592 ++ ID=fedora 00:02:39.592 ++ VERSION_ID=39 00:02:39.592 ++ VERSION_CODENAME= 00:02:39.592 ++ PLATFORM_ID=platform:f39 00:02:39.592 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:39.592 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:39.592 ++ LOGO=fedora-logo-icon 00:02:39.592 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:39.592 ++ HOME_URL=https://fedoraproject.org/ 00:02:39.592 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:39.592 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:39.592 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:39.592 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:39.592 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:39.592 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:39.592 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:39.592 ++ SUPPORT_END=2024-11-12 00:02:39.592 ++ VARIANT='Cloud Edition' 00:02:39.592 ++ VARIANT_ID=cloud 00:02:39.592 + uname -a 00:02:39.592 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:39.592 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:40.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:40.157 Hugepages 00:02:40.157 node hugesize free / total 00:02:40.157 node0 1048576kB 0 / 0 00:02:40.157 node0 2048kB 0 / 0 00:02:40.157 00:02:40.157 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:40.415 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:40.415 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:40.415 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:40.415 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:40.415 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:40.415 + rm -f /tmp/spdk-ld-path 00:02:40.415 + source autorun-spdk.conf 00:02:40.415 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.415 ++ SPDK_TEST_NVME=1 00:02:40.415 ++ SPDK_TEST_FTL=1 00:02:40.415 ++ SPDK_TEST_ISAL=1 00:02:40.415 ++ SPDK_RUN_ASAN=1 00:02:40.415 ++ SPDK_RUN_UBSAN=1 00:02:40.415 ++ SPDK_TEST_XNVME=1 00:02:40.415 ++ SPDK_TEST_NVME_FDP=1 00:02:40.415 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:40.415 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:40.415 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:40.415 ++ RUN_NIGHTLY=1 00:02:40.415 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:40.415 + [[ -n '' ]] 00:02:40.415 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:40.415 + for M in /var/spdk/build-*-manifest.txt 00:02:40.415 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:40.415 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.415 + for M in /var/spdk/build-*-manifest.txt 00:02:40.415 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:40.415 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.415 + for M in /var/spdk/build-*-manifest.txt 00:02:40.415 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:40.415 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.415 ++ uname 00:02:40.415 + [[ Linux == 
\L\i\n\u\x ]] 00:02:40.415 + sudo dmesg -T 00:02:40.415 + sudo dmesg --clear 00:02:40.415 + dmesg_pid=5770 00:02:40.415 + [[ Fedora Linux == FreeBSD ]] 00:02:40.415 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.415 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.415 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:40.415 + sudo dmesg -Tw 00:02:40.415 + [[ -x /usr/src/fio-static/fio ]] 00:02:40.415 + export FIO_BIN=/usr/src/fio-static/fio 00:02:40.415 + FIO_BIN=/usr/src/fio-static/fio 00:02:40.415 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:40.415 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:40.415 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:40.415 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.415 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.415 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:40.415 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.415 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.415 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.415 23:36:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:40.415 23:36:28 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:40.415 23:36:28 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:40.415 23:36:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:40.415 23:36:28 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.673 23:36:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:40.673 23:36:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:40.673 23:36:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:40.673 23:36:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:40.673 23:36:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:40.673 23:36:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:40.673 23:36:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.673 23:36:28 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.673 23:36:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.673 23:36:28 -- paths/export.sh@5 -- $ export PATH 00:02:40.673 23:36:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.673 23:36:28 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:40.673 23:36:28 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:40.673 23:36:28 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732664188.XXXXXX 00:02:40.673 23:36:28 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732664188.ZwO4t6 00:02:40.673 23:36:28 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:40.673 23:36:28 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:40.673 23:36:28 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:40.673 23:36:28 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:40.673 23:36:28 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:40.673 23:36:28 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:40.673 23:36:28 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:40.673 23:36:28 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:40.673 23:36:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.673 23:36:28 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:40.673 23:36:28 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:40.673 23:36:28 -- pm/common@17 -- $ local monitor 00:02:40.673 23:36:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.673 23:36:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.673 23:36:28 -- pm/common@25 -- $ 
sleep 1 00:02:40.673 23:36:28 -- pm/common@21 -- $ date +%s 00:02:40.673 23:36:28 -- pm/common@21 -- $ date +%s 00:02:40.673 23:36:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732664188 00:02:40.673 23:36:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732664188 00:02:40.673 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732664188_collect-cpu-load.pm.log 00:02:40.673 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732664188_collect-vmstat.pm.log 00:02:41.617 23:36:29 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:41.617 23:36:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:41.617 23:36:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:41.617 23:36:29 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:41.617 23:36:29 -- spdk/autobuild.sh@16 -- $ date -u 00:02:41.617 Tue Nov 26 11:36:29 PM UTC 2024 00:02:41.617 23:36:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:41.617 v25.01-pre-271-g2f2acf4eb 00:02:41.617 23:36:29 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:41.617 23:36:29 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:41.617 23:36:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:41.617 23:36:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:41.617 23:36:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.617 ************************************ 00:02:41.617 START TEST asan 00:02:41.617 ************************************ 00:02:41.617 using asan 00:02:41.617 23:36:29 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:41.617 00:02:41.617 real 0m0.000s 00:02:41.617 user 0m0.000s 00:02:41.617 sys 0m0.000s 00:02:41.617 23:36:29 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:41.617 23:36:29 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:41.617 ************************************ 00:02:41.617 END TEST asan 00:02:41.617 ************************************ 00:02:41.617 23:36:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:41.617 23:36:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:41.617 23:36:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:41.617 23:36:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:41.617 23:36:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.617 ************************************ 00:02:41.617 START TEST ubsan 00:02:41.617 ************************************ 00:02:41.617 using ubsan 00:02:41.617 23:36:29 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:41.617 00:02:41.617 real 0m0.000s 00:02:41.617 user 0m0.000s 00:02:41.617 sys 0m0.000s 00:02:41.617 23:36:29 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:41.617 ************************************ 00:02:41.617 23:36:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:41.617 END TEST ubsan 00:02:41.617 ************************************ 00:02:41.617 23:36:29 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:41.617 23:36:29 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:41.617 23:36:29 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:41.617 23:36:29 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 
1 ']' 00:02:41.617 23:36:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:41.617 23:36:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.617 ************************************ 00:02:41.617 START TEST build_native_dpdk 00:02:41.617 ************************************ 00:02:41.617 23:36:29 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:41.617 caf0f5d395 version: 22.11.4 00:02:41.617 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:41.617 dc9c799c7d vhost: fix missing spinlock unlock 00:02:41.617 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:41.617 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:41.617 23:36:29 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 
00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:41.617 23:36:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:41.875 patching file config/rte_config.h 00:02:41.875 Hunk #1 succeeded at 60 (offset 1 line). 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:41.875 patching file lib/pcapng/rte_pcapng.c 00:02:41.875 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:41.875 23:36:29 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:41.875 23:36:29 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:46.062 The Meson build system 00:02:46.062 Version: 1.5.0 00:02:46.062 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:46.062 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:46.062 Build type: native build 00:02:46.062 Program cat found: YES (/usr/bin/cat) 00:02:46.062 Project name: DPDK 00:02:46.062 Project version: 22.11.4 00:02:46.062 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:46.062 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:46.062 Host machine cpu family: x86_64 00:02:46.062 Host machine cpu: x86_64 00:02:46.062 Message: ## Building in Developer Mode ## 00:02:46.062 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:46.062 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:46.062 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:46.062 Program objdump found: YES (/usr/bin/objdump) 00:02:46.062 Program python3 found: YES (/usr/bin/python3) 00:02:46.062 Program cat found: YES (/usr/bin/cat) 00:02:46.062 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:46.062 Checking for size of "void *" : 8 00:02:46.062 Checking for size of "void *" : 8 (cached) 00:02:46.062 Library m found: YES 00:02:46.062 Library numa found: YES 00:02:46.062 Has header "numaif.h" : YES 00:02:46.062 Library fdt found: NO 00:02:46.062 Library execinfo found: NO 00:02:46.062 Has header "execinfo.h" : YES 00:02:46.062 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:46.062 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:46.062 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:46.062 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:46.062 Run-time dependency openssl found: YES 3.1.1 00:02:46.062 Run-time dependency libpcap found: YES 1.10.4 00:02:46.062 Has header "pcap.h" with dependency libpcap: YES 00:02:46.062 Compiler for C supports arguments -Wcast-qual: YES 00:02:46.062 Compiler for C supports arguments -Wdeprecated: YES 00:02:46.062 Compiler for C supports arguments -Wformat: YES 00:02:46.062 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:46.062 Compiler for C supports arguments -Wformat-security: NO 00:02:46.062 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:46.062 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:46.062 Compiler for C supports arguments -Wnested-externs: YES 00:02:46.062 Compiler for C supports arguments -Wold-style-definition: YES 00:02:46.062 Compiler for C supports arguments -Wpointer-arith: YES 00:02:46.062 Compiler for C supports arguments -Wsign-compare: YES 00:02:46.062 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:46.062 Compiler for C supports arguments -Wundef: YES 00:02:46.062 Compiler for C supports arguments -Wwrite-strings: YES 00:02:46.062 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:46.062 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:46.062 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:46.062 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:46.062 Compiler for C supports arguments -mavx512f: YES 00:02:46.062 Checking if "AVX512 checking" compiles: YES 00:02:46.062 Fetching value of define "__SSE4_2__" : 1 00:02:46.062 Fetching value of define "__AES__" : 1 00:02:46.062 Fetching value of define "__AVX__" : 1 00:02:46.062 Fetching value of define "__AVX2__" : 1 00:02:46.062 Fetching value of define "__AVX512BW__" : 1 00:02:46.062 Fetching value of define "__AVX512CD__" : 1 00:02:46.062 Fetching value of define "__AVX512DQ__" : 1 00:02:46.062 Fetching value of define "__AVX512F__" : 1 00:02:46.062 Fetching value of define "__AVX512VL__" : 1 00:02:46.062 Fetching value of define "__PCLMUL__" : 1 00:02:46.062 Fetching value of define "__RDRND__" : 1 00:02:46.062 Fetching value of define "__RDSEED__" : 1 00:02:46.062 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:46.062 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:46.062 Message: lib/kvargs: Defining dependency "kvargs" 00:02:46.062 Message: lib/telemetry: Defining dependency "telemetry" 00:02:46.062 Checking for function "getentropy" : YES 00:02:46.062 Message: lib/eal: Defining dependency "eal" 00:02:46.062 Message: lib/ring: Defining dependency "ring" 00:02:46.062 Message: lib/rcu: Defining dependency "rcu" 00:02:46.062 Message: lib/mempool: Defining dependency "mempool" 00:02:46.062 Message: lib/mbuf: Defining dependency "mbuf" 00:02:46.062 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:46.062 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:46.062 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:46.062 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:46.062 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:46.062 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:46.062 Compiler for C supports arguments -mpclmul: YES 00:02:46.062 Compiler for C supports arguments -maes: YES 00:02:46.062 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:46.062 Compiler for C supports arguments -mavx512bw: YES 00:02:46.062 Compiler for C supports arguments -mavx512dq: YES 00:02:46.062 Compiler for C supports arguments -mavx512vl: YES 00:02:46.062 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:46.062 Compiler for C supports arguments -mavx2: YES 00:02:46.062 Compiler for C supports arguments -mavx: YES 00:02:46.062 Message: lib/net: Defining dependency "net" 00:02:46.062 Message: lib/meter: Defining dependency "meter" 00:02:46.062 Message: lib/ethdev: Defining dependency "ethdev" 00:02:46.062 Message: lib/pci: Defining dependency "pci" 00:02:46.062 Message: lib/cmdline: Defining dependency "cmdline" 00:02:46.062 Message: lib/metrics: Defining dependency "metrics" 00:02:46.062 Message: lib/hash: Defining dependency "hash" 00:02:46.062 Message: lib/timer: Defining dependency "timer" 00:02:46.062 Fetching value of define "__AVX2__" : 1 (cached) 00:02:46.062 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:46.062 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:46.062 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:46.062 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:46.062 Message: lib/acl: Defining dependency "acl" 00:02:46.062 Message: lib/bbdev: Defining dependency "bbdev" 00:02:46.062 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:46.062 Run-time dependency libelf found: YES 0.191 00:02:46.062 Message: lib/bpf: Defining dependency "bpf" 00:02:46.062 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:46.062 Message: lib/compressdev: Defining dependency "compressdev" 00:02:46.062 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:46.062 Message: lib/distributor: Defining dependency "distributor" 00:02:46.062 Message: lib/efd: Defining dependency "efd" 00:02:46.063 Message: lib/eventdev: Defining dependency "eventdev" 00:02:46.063 Message: lib/gpudev: Defining dependency "gpudev" 00:02:46.063 Message: lib/gro: Defining dependency "gro" 00:02:46.063 Message: lib/gso: Defining dependency "gso" 00:02:46.063 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:46.063 Message: lib/jobstats: Defining dependency "jobstats" 00:02:46.063 Message: lib/latencystats: Defining dependency "latencystats" 00:02:46.063 Message: lib/lpm: Defining dependency "lpm" 00:02:46.063 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:46.063 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:46.063 Fetching value of define "__AVX512IFMA__" : 1 00:02:46.063 Message: lib/member: Defining dependency "member" 00:02:46.063 Message: lib/pcapng: Defining dependency "pcapng" 00:02:46.063 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:46.063 Message: lib/power: Defining dependency "power" 00:02:46.063 Message: lib/rawdev: Defining dependency "rawdev" 00:02:46.063 Message: lib/regexdev: Defining dependency "regexdev" 00:02:46.063 Message: lib/dmadev: Defining dependency "dmadev" 00:02:46.063 Message: lib/rib: Defining dependency "rib" 00:02:46.063 Message: lib/reorder: 
Defining dependency "reorder" 00:02:46.063 Message: lib/sched: Defining dependency "sched" 00:02:46.063 Message: lib/security: Defining dependency "security" 00:02:46.063 Message: lib/stack: Defining dependency "stack" 00:02:46.063 Has header "linux/userfaultfd.h" : YES 00:02:46.063 Message: lib/vhost: Defining dependency "vhost" 00:02:46.063 Message: lib/ipsec: Defining dependency "ipsec" 00:02:46.063 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:46.063 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:46.063 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:46.063 Message: lib/fib: Defining dependency "fib" 00:02:46.063 Message: lib/port: Defining dependency "port" 00:02:46.063 Message: lib/pdump: Defining dependency "pdump" 00:02:46.063 Message: lib/table: Defining dependency "table" 00:02:46.063 Message: lib/pipeline: Defining dependency "pipeline" 00:02:46.063 Message: lib/graph: Defining dependency "graph" 00:02:46.063 Message: lib/node: Defining dependency "node" 00:02:46.063 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:46.063 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:46.063 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:46.063 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:46.063 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:46.063 Compiler for C supports arguments -Wno-unused-value: YES 00:02:46.063 Compiler for C supports arguments -Wno-format: YES 00:02:46.063 Compiler for C supports arguments -Wno-format-security: YES 00:02:46.063 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:46.063 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:46.063 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:46.063 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:46.996 Fetching value of define "__AVX2__" : 1 (cached) 00:02:46.996 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:46.996 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:46.996 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:46.996 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:46.996 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:46.996 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:46.996 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:46.996 Configuring doxy-api.conf using configuration 00:02:46.996 Program sphinx-build found: NO 00:02:46.996 Configuring rte_build_config.h using configuration 00:02:46.996 Message: 00:02:46.996 ================= 00:02:46.996 Applications Enabled 00:02:46.996 ================= 00:02:46.996 00:02:46.996 apps: 00:02:46.996 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:46.996 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:46.996 test-security-perf, 00:02:46.996 00:02:46.996 Message: 00:02:46.996 ================= 00:02:46.996 Libraries Enabled 00:02:46.996 ================= 00:02:46.996 00:02:46.996 libs: 00:02:46.996 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:46.996 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:46.996 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:46.996 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:46.996 member, pcapng, power, rawdev, regexdev, dmadev, rib, 
reorder, 00:02:46.996 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:46.996 table, pipeline, graph, node, 00:02:46.996 00:02:46.996 Message: 00:02:46.996 =============== 00:02:46.996 Drivers Enabled 00:02:46.996 =============== 00:02:46.996 00:02:46.996 common: 00:02:46.996 00:02:46.996 bus: 00:02:46.996 pci, vdev, 00:02:46.996 mempool: 00:02:46.996 ring, 00:02:46.996 dma: 00:02:46.996 00:02:46.996 net: 00:02:46.996 i40e, 00:02:46.996 raw: 00:02:46.996 00:02:46.996 crypto: 00:02:46.996 00:02:46.996 compress: 00:02:46.996 00:02:46.996 regex: 00:02:46.996 00:02:46.996 vdpa: 00:02:46.996 00:02:46.996 event: 00:02:46.996 00:02:46.996 baseband: 00:02:46.996 00:02:46.996 gpu: 00:02:46.996 00:02:46.996 00:02:46.996 Message: 00:02:46.997 ================= 00:02:46.997 Content Skipped 00:02:46.997 ================= 00:02:46.997 00:02:46.997 apps: 00:02:46.997 00:02:46.997 libs: 00:02:46.997 kni: explicitly disabled via build config (deprecated lib) 00:02:46.997 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:46.997 00:02:46.997 drivers: 00:02:46.997 common/cpt: not in enabled drivers build config 00:02:46.997 common/dpaax: not in enabled drivers build config 00:02:46.997 common/iavf: not in enabled drivers build config 00:02:46.997 common/idpf: not in enabled drivers build config 00:02:46.997 common/mvep: not in enabled drivers build config 00:02:46.997 common/octeontx: not in enabled drivers build config 00:02:46.997 bus/auxiliary: not in enabled drivers build config 00:02:46.997 bus/dpaa: not in enabled drivers build config 00:02:46.997 bus/fslmc: not in enabled drivers build config 00:02:46.997 bus/ifpga: not in enabled drivers build config 00:02:46.997 bus/vmbus: not in enabled drivers build config 00:02:46.997 common/cnxk: not in enabled drivers build config 00:02:46.997 common/mlx5: not in enabled drivers build config 00:02:46.997 common/qat: not in enabled drivers build config 00:02:46.997 common/sfc_efx: not in enabled drivers build config 00:02:46.997 mempool/bucket: not in enabled drivers build config 00:02:46.997 mempool/cnxk: not in enabled drivers build config 00:02:46.997 mempool/dpaa: not in enabled drivers build config 00:02:46.997 mempool/dpaa2: not in enabled drivers build config 00:02:46.997 mempool/octeontx: not in enabled drivers build config 00:02:46.997 mempool/stack: not in enabled drivers build config 00:02:46.997 dma/cnxk: not in enabled drivers build config 00:02:46.997 dma/dpaa: not in enabled drivers build config 00:02:46.997 dma/dpaa2: not in enabled drivers build config 00:02:46.997 dma/hisilicon: not in enabled drivers build config 00:02:46.997 dma/idxd: not in enabled drivers build config 00:02:46.997 dma/ioat: not in enabled drivers build config 00:02:46.997 dma/skeleton: not in enabled drivers build config 00:02:46.997 net/af_packet: not in enabled drivers build config 00:02:46.997 net/af_xdp: not in enabled drivers build config 00:02:46.997 net/ark: not in enabled drivers build config 00:02:46.997 net/atlantic: not in enabled drivers build config 00:02:46.997 net/avp: not in enabled drivers build config 00:02:46.997 net/axgbe: not in enabled drivers build config 00:02:46.997 net/bnx2x: not in enabled drivers build config 00:02:46.997 net/bnxt: not in enabled drivers build config 00:02:46.997 net/bonding: not in enabled drivers build config 00:02:46.997 net/cnxk: not in enabled drivers build config 00:02:46.997 net/cxgbe: not in enabled drivers build config 00:02:46.997 net/dpaa: not in enabled drivers build config 
00:02:46.997 net/dpaa2: not in enabled drivers build config 00:02:46.997 net/e1000: not in enabled drivers build config 00:02:46.997 net/ena: not in enabled drivers build config 00:02:46.997 net/enetc: not in enabled drivers build config 00:02:46.997 net/enetfec: not in enabled drivers build config 00:02:46.997 net/enic: not in enabled drivers build config 00:02:46.997 net/failsafe: not in enabled drivers build config 00:02:46.997 net/fm10k: not in enabled drivers build config 00:02:46.997 net/gve: not in enabled drivers build config 00:02:46.997 net/hinic: not in enabled drivers build config 00:02:46.997 net/hns3: not in enabled drivers build config 00:02:46.997 net/iavf: not in enabled drivers build config 00:02:46.997 net/ice: not in enabled drivers build config 00:02:46.997 net/idpf: not in enabled drivers build config 00:02:46.997 net/igc: not in enabled drivers build config 00:02:46.997 net/ionic: not in enabled drivers build config 00:02:46.997 net/ipn3ke: not in enabled drivers build config 00:02:46.997 net/ixgbe: not in enabled drivers build config 00:02:46.997 net/kni: not in enabled drivers build config 00:02:46.997 net/liquidio: not in enabled drivers build config 00:02:46.997 net/mana: not in enabled drivers build config 00:02:46.997 net/memif: not in enabled drivers build config 00:02:46.997 net/mlx4: not in enabled drivers build config 00:02:46.997 net/mlx5: not in enabled drivers build config 00:02:46.997 net/mvneta: not in enabled drivers build config 00:02:46.997 net/mvpp2: not in enabled drivers build config 00:02:46.997 net/netvsc: not in enabled drivers build config 00:02:46.997 net/nfb: not in enabled drivers build config 00:02:46.997 net/nfp: not in enabled drivers build config 00:02:46.997 net/ngbe: not in enabled drivers build config 00:02:46.997 net/null: not in enabled drivers build config 00:02:46.997 net/octeontx: not in enabled drivers build config 00:02:46.997 net/octeon_ep: not in enabled drivers build config 00:02:46.997 net/pcap: not in enabled drivers build config 00:02:46.997 net/pfe: not in enabled drivers build config 00:02:46.997 net/qede: not in enabled drivers build config 00:02:46.997 net/ring: not in enabled drivers build config 00:02:46.997 net/sfc: not in enabled drivers build config 00:02:46.997 net/softnic: not in enabled drivers build config 00:02:46.997 net/tap: not in enabled drivers build config 00:02:46.997 net/thunderx: not in enabled drivers build config 00:02:46.997 net/txgbe: not in enabled drivers build config 00:02:46.997 net/vdev_netvsc: not in enabled drivers build config 00:02:46.997 net/vhost: not in enabled drivers build config 00:02:46.997 net/virtio: not in enabled drivers build config 00:02:46.997 net/vmxnet3: not in enabled drivers build config 00:02:46.997 raw/cnxk_bphy: not in enabled drivers build config 00:02:46.997 raw/cnxk_gpio: not in enabled drivers build config 00:02:46.997 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:46.997 raw/ifpga: not in enabled drivers build config 00:02:46.997 raw/ntb: not in enabled drivers build config 00:02:46.997 raw/skeleton: not in enabled drivers build config 00:02:46.997 crypto/armv8: not in enabled drivers build config 00:02:46.997 crypto/bcmfs: not in enabled drivers build config 00:02:46.997 crypto/caam_jr: not in enabled drivers build config 00:02:46.997 crypto/ccp: not in enabled drivers build config 00:02:46.997 crypto/cnxk: not in enabled drivers build config 00:02:46.997 crypto/dpaa_sec: not in enabled drivers build config 00:02:46.997 crypto/dpaa2_sec: not in 
enabled drivers build config 00:02:46.997 crypto/ipsec_mb: not in enabled drivers build config 00:02:46.997 crypto/mlx5: not in enabled drivers build config 00:02:46.997 crypto/mvsam: not in enabled drivers build config 00:02:46.997 crypto/nitrox: not in enabled drivers build config 00:02:46.997 crypto/null: not in enabled drivers build config 00:02:46.997 crypto/octeontx: not in enabled drivers build config 00:02:46.997 crypto/openssl: not in enabled drivers build config 00:02:46.997 crypto/scheduler: not in enabled drivers build config 00:02:46.997 crypto/uadk: not in enabled drivers build config 00:02:46.997 crypto/virtio: not in enabled drivers build config 00:02:46.997 compress/isal: not in enabled drivers build config 00:02:46.997 compress/mlx5: not in enabled drivers build config 00:02:46.997 compress/octeontx: not in enabled drivers build config 00:02:46.997 compress/zlib: not in enabled drivers build config 00:02:46.997 regex/mlx5: not in enabled drivers build config 00:02:46.997 regex/cn9k: not in enabled drivers build config 00:02:46.997 vdpa/ifc: not in enabled drivers build config 00:02:46.997 vdpa/mlx5: not in enabled drivers build config 00:02:46.997 vdpa/sfc: not in enabled drivers build config 00:02:46.997 event/cnxk: not in enabled drivers build config 00:02:46.997 event/dlb2: not in enabled drivers build config 00:02:46.997 event/dpaa: not in enabled drivers build config 00:02:46.997 event/dpaa2: not in enabled drivers build config 00:02:46.997 event/dsw: not in enabled drivers build config 00:02:46.997 event/opdl: not in enabled drivers build config 00:02:46.997 event/skeleton: not in enabled drivers build config 00:02:46.997 event/sw: not in enabled drivers build config 00:02:46.997 event/octeontx: not in enabled drivers build config 00:02:46.997 baseband/acc: not in enabled drivers build config 00:02:46.997 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:46.997 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:46.997 baseband/la12xx: not in enabled drivers build config 00:02:46.997 baseband/null: not in enabled drivers build config 00:02:46.997 baseband/turbo_sw: not in enabled drivers build config 00:02:46.997 gpu/cuda: not in enabled drivers build config 00:02:46.997 00:02:46.997 00:02:46.997 Build targets in project: 309 00:02:46.997 00:02:46.997 DPDK 22.11.4 00:02:46.997 00:02:46.997 User defined options 00:02:46.997 libdir : lib 00:02:46.997 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:46.997 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:46.997 c_link_args : 00:02:46.997 enable_docs : false 00:02:46.997 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:46.997 enable_kmods : false 00:02:46.997 machine : native 00:02:46.997 tests : false 00:02:46.997 00:02:46.997 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:46.997 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
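The deprecation warning above refers to invoking meson as `meson [options]` instead of `meson setup [options]`. For orientation only, a sketch of the non-deprecated form, reconstructed from the "User defined options" summary printed just above; the working directory (assumed to be the DPDK source root) and the exact -D option spellings are assumptions, not taken from this log:

    # Hedged reconstruction of the configure step; values mirror the
    # "User defined options" summary (prefix, libdir, c_args, drivers, etc.).
    meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false

ninja is then pointed at the same build directory, as the next log line shows.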
00:02:46.997 23:36:35 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:46.997 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:46.997 [1/738] Generating lib/rte_kvargs_def with a custom command 00:02:46.997 [2/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:46.997 [3/738] Generating lib/rte_telemetry_def with a custom command 00:02:46.997 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:47.255 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:47.255 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:47.255 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:47.255 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:47.255 [9/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:47.255 [10/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:47.255 [11/738] Linking static target lib/librte_kvargs.a 00:02:47.255 [12/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:47.255 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:47.255 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:47.255 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:47.255 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:47.255 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:47.255 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:47.255 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:47.512 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.512 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:47.512 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:47.512 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:47.512 [24/738] Linking target lib/librte_kvargs.so.23.0 00:02:47.512 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:47.512 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.512 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.512 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.512 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.512 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:47.512 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.512 [32/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:47.512 [33/738] Linking static target lib/librte_telemetry.a 00:02:47.512 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.769 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:47.769 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.769 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.769 [38/738] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.769 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.769 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.769 [41/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:47.769 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.769 [43/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.769 [44/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:48.026 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.026 [46/738] Linking target lib/librte_telemetry.so.23.0 00:02:48.026 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:48.026 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:48.026 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.026 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:48.026 [51/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:48.026 [52/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.026 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:48.026 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.026 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:48.026 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.026 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.026 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.026 [59/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.026 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.026 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.026 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.026 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.026 [64/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.026 [65/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:48.283 [66/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.283 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.283 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.283 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.283 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:48.283 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.283 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:48.283 [73/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.283 [74/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.283 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.283 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.283 [77/738] Generating lib/rte_eal_mingw with a custom command 00:02:48.283 [78/738] Generating lib/rte_eal_def with a 
custom command 00:02:48.283 [79/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:48.283 [80/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.283 [81/738] Generating lib/rte_ring_def with a custom command 00:02:48.283 [82/738] Generating lib/rte_ring_mingw with a custom command 00:02:48.283 [83/738] Generating lib/rte_rcu_def with a custom command 00:02:48.283 [84/738] Generating lib/rte_rcu_mingw with a custom command 00:02:48.283 [85/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:48.283 [86/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.539 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.539 [88/738] Linking static target lib/librte_ring.a 00:02:48.539 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.539 [90/738] Generating lib/rte_mempool_def with a custom command 00:02:48.539 [91/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.540 [92/738] Generating lib/rte_mempool_mingw with a custom command 00:02:48.540 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.540 [94/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.540 [95/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.797 [96/738] Linking static target lib/librte_eal.a 00:02:48.797 [97/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:48.797 [98/738] Generating lib/rte_mbuf_def with a custom command 00:02:48.797 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:48.797 [100/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.797 [101/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:48.797 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:48.797 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.054 [104/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.054 [105/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.054 [106/738] Linking static target lib/librte_rcu.a 00:02:49.054 [107/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.054 [108/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.054 [109/738] Generating lib/rte_net_def with a custom command 00:02:49.054 [110/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.054 [111/738] Linking static target lib/librte_mempool.a 00:02:49.054 [112/738] Generating lib/rte_net_mingw with a custom command 00:02:49.054 [113/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.054 [114/738] Generating lib/rte_meter_def with a custom command 00:02:49.311 [115/738] Generating lib/rte_meter_mingw with a custom command 00:02:49.311 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.311 [117/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.311 [118/738] Linking static target lib/librte_meter.a 00:02:49.311 [119/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.311 [120/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.311 [121/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:49.311 [122/738] Linking static target lib/librte_net.a 
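At this point the core static libraries (librte_eal.a, librte_ring.a, librte_mempool.a, librte_net.a, and so on) have been linked. As a side note, a consumer of this build would normally resolve these libraries through the generated libdpdk pkg-config file rather than naming each archive; a minimal sketch, assuming the prefix and libdir shown in the configuration summary above and an install step that is not part of this log excerpt:

    # Assumed install step, then query the generated libdpdk.pc for the
    # compile and link flags needed to build against this DPDK tree.
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp install
    PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
      pkg-config --cflags --libs libdpdk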
00:02:49.311 [123/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.311 [124/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.311 [125/738] Linking static target lib/librte_mbuf.a 00:02:49.567 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.567 [127/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.567 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:49.567 [129/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.567 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:49.823 [131/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.823 [132/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.823 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.823 [134/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.080 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.080 [136/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.080 [137/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:50.080 [138/738] Generating lib/rte_ethdev_def with a custom command 00:02:50.080 [139/738] Generating lib/rte_pci_def with a custom command 00:02:50.080 [140/738] Generating lib/rte_pci_mingw with a custom command 00:02:50.080 [141/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.080 [142/738] Linking static target lib/librte_pci.a 00:02:50.080 [143/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.080 [144/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.080 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:50.338 [146/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.338 [147/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:50.338 [148/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.338 [149/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.338 [150/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.338 [151/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:50.338 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.338 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.338 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.338 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.338 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.338 [157/738] Generating lib/rte_cmdline_def with a custom command 00:02:50.338 [158/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:50.338 [159/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:50.338 [160/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.338 [161/738] Generating lib/rte_metrics_def with a custom command 00:02:50.338 [162/738] Generating lib/rte_metrics_mingw with a custom command 00:02:50.338 [163/738] Compiling C 
object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:50.338 [164/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.596 [165/738] Generating lib/rte_hash_def with a custom command 00:02:50.596 [166/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:50.596 [167/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:50.596 [168/738] Linking static target lib/librte_cmdline.a 00:02:50.596 [169/738] Generating lib/rte_timer_def with a custom command 00:02:50.596 [170/738] Generating lib/rte_hash_mingw with a custom command 00:02:50.596 [171/738] Generating lib/rte_timer_mingw with a custom command 00:02:50.596 [172/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.596 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:50.596 [174/738] Linking static target lib/librte_metrics.a 00:02:50.855 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:50.855 [176/738] Linking static target lib/librte_timer.a 00:02:50.855 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.855 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:50.855 [179/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:51.113 [180/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:51.113 [181/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.113 [182/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.113 [183/738] Generating lib/rte_acl_def with a custom command 00:02:51.113 [184/738] Generating lib/rte_acl_mingw with a custom command 00:02:51.113 [185/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:51.113 [186/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:51.113 [187/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:51.113 [188/738] Generating lib/rte_bbdev_def with a custom command 00:02:51.113 [189/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:51.113 [190/738] Linking static target lib/librte_ethdev.a 00:02:51.371 [191/738] Generating lib/rte_bitratestats_def with a custom command 00:02:51.371 [192/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:51.628 [193/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:51.628 [194/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:51.628 [195/738] Linking static target lib/librte_bitratestats.a 00:02:51.628 [196/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:51.628 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:51.629 [198/738] Linking static target lib/librte_bbdev.a 00:02:51.629 [199/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.887 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:51.887 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:52.154 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:52.154 [203/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.154 [204/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:52.154 [205/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.154 [206/738] 
Linking static target lib/librte_hash.a 00:02:52.154 [207/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:52.411 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:52.411 [209/738] Generating lib/rte_bpf_def with a custom command 00:02:52.411 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:02:52.669 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:52.669 [212/738] Generating lib/rte_cfgfile_def with a custom command 00:02:52.669 [213/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:52.669 [214/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:52.669 [215/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.669 [216/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:52.669 [217/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:52.669 [218/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:52.669 [219/738] Generating lib/rte_compressdev_def with a custom command 00:02:52.669 [220/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:52.669 [221/738] Linking static target lib/librte_cfgfile.a 00:02:52.926 [222/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:52.926 [223/738] Linking static target lib/librte_bpf.a 00:02:52.926 [224/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.926 [225/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.926 [226/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.926 [227/738] Generating lib/rte_cryptodev_def with a custom command 00:02:52.926 [228/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:52.926 [229/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.926 [230/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.926 [231/738] Linking static target lib/librte_compressdev.a 00:02:53.183 [232/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.183 [233/738] Generating lib/rte_distributor_def with a custom command 00:02:53.183 [234/738] Generating lib/rte_distributor_mingw with a custom command 00:02:53.183 [235/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.183 [236/738] Generating lib/rte_efd_def with a custom command 00:02:53.183 [237/738] Generating lib/rte_efd_mingw with a custom command 00:02:53.183 [238/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:53.183 [239/738] Linking static target lib/librte_acl.a 00:02:53.440 [240/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:53.440 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:53.440 [242/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.440 [243/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:53.440 [244/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.440 [245/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:53.440 [246/738] Linking target lib/librte_eal.so.23.0 00:02:53.440 [247/738] Linking static target 
lib/librte_distributor.a 00:02:53.697 [248/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.697 [249/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:53.697 [250/738] Linking target lib/librte_ring.so.23.0 00:02:53.697 [251/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.697 [252/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:53.697 [253/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:53.697 [254/738] Linking target lib/librte_meter.so.23.0 00:02:53.697 [255/738] Linking target lib/librte_pci.so.23.0 00:02:53.697 [256/738] Linking target lib/librte_rcu.so.23.0 00:02:53.697 [257/738] Linking target lib/librte_mempool.so.23.0 00:02:54.030 [258/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:54.030 [259/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:54.030 [260/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:54.030 [261/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:54.030 [262/738] Linking target lib/librte_timer.so.23.0 00:02:54.030 [263/738] Linking target lib/librte_mbuf.so.23.0 00:02:54.030 [264/738] Linking target lib/librte_acl.so.23.0 00:02:54.030 [265/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:54.030 [266/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:54.030 [267/738] Linking target lib/librte_cfgfile.so.23.0 00:02:54.030 [268/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:54.030 [269/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:54.030 [270/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:54.030 [271/738] Linking target lib/librte_net.so.23.0 00:02:54.030 [272/738] Linking target lib/librte_compressdev.so.23.0 00:02:54.030 [273/738] Linking target lib/librte_bbdev.so.23.0 00:02:54.030 [274/738] Linking static target lib/librte_efd.a 00:02:54.030 [275/738] Linking target lib/librte_distributor.so.23.0 00:02:54.030 [276/738] Generating lib/rte_eventdev_def with a custom command 00:02:54.030 [277/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:54.030 [278/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:54.287 [279/738] Linking target lib/librte_cmdline.so.23.0 00:02:54.287 [280/738] Linking target lib/librte_hash.so.23.0 00:02:54.287 [281/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:54.287 [282/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.287 [283/738] Generating lib/rte_gpudev_def with a custom command 00:02:54.287 [284/738] Generating lib/rte_gpudev_mingw with a custom command 00:02:54.287 [285/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:54.287 [286/738] Linking target lib/librte_efd.so.23.0 00:02:54.287 [287/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:54.544 [288/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.544 [289/738] Linking target lib/librte_ethdev.so.23.0 00:02:54.544 [290/738] Compiling C object 
lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:54.544 [291/738] Linking static target lib/librte_gpudev.a 00:02:54.544 [292/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:54.544 [293/738] Linking target lib/librte_metrics.so.23.0 00:02:54.802 [294/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:54.802 [295/738] Linking target lib/librte_bpf.so.23.0 00:02:54.802 [296/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:54.802 [297/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:54.802 [298/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:54.802 [299/738] Generating lib/rte_gro_def with a custom command 00:02:54.802 [300/738] Generating lib/rte_gro_mingw with a custom command 00:02:54.802 [301/738] Linking target lib/librte_bitratestats.so.23.0 00:02:54.802 [302/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:54.802 [303/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:54.802 [304/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:54.802 [305/738] Linking static target lib/librte_cryptodev.a 00:02:55.060 [306/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:55.060 [307/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:55.060 [308/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:55.060 [309/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:55.060 [310/738] Generating lib/rte_gso_mingw with a custom command 00:02:55.060 [311/738] Generating lib/rte_gso_def with a custom command 00:02:55.060 [312/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:55.060 [313/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:55.060 [314/738] Linking static target lib/librte_gro.a 00:02:55.060 [315/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:55.334 [316/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.334 [317/738] Linking static target lib/librte_eventdev.a 00:02:55.334 [318/738] Linking target lib/librte_gpudev.so.23.0 00:02:55.334 [319/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:55.334 [320/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:55.334 [321/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:55.334 [322/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.334 [323/738] Linking static target lib/librte_gso.a 00:02:55.334 [324/738] Linking target lib/librte_gro.so.23.0 00:02:55.334 [325/738] Generating lib/rte_ip_frag_def with a custom command 00:02:55.334 [326/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:55.334 [327/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:55.334 [328/738] Generating lib/rte_jobstats_def with a custom command 00:02:55.595 [329/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.595 [330/738] Generating lib/rte_jobstats_mingw with a custom command 00:02:55.595 [331/738] Linking target lib/librte_gso.so.23.0 00:02:55.595 [332/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:55.595 [333/738] Linking static target lib/librte_jobstats.a 00:02:55.595 [334/738] Generating 
lib/rte_latencystats_def with a custom command 00:02:55.595 [335/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:55.595 [336/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:55.595 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:55.595 [338/738] Generating lib/rte_lpm_def with a custom command 00:02:55.595 [339/738] Generating lib/rte_lpm_mingw with a custom command 00:02:55.595 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:55.595 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:55.596 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:55.596 [343/738] Linking static target lib/librte_ip_frag.a 00:02:55.853 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.853 [345/738] Linking target lib/librte_jobstats.so.23.0 00:02:55.853 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:55.853 [347/738] Linking static target lib/librte_latencystats.a 00:02:55.853 [348/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.853 [349/738] Linking target lib/librte_ip_frag.so.23.0 00:02:55.853 [350/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:56.110 [351/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:56.110 [352/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.110 [353/738] Generating lib/rte_member_def with a custom command 00:02:56.110 [354/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:56.110 [355/738] Linking target lib/librte_latencystats.so.23.0 00:02:56.110 [356/738] Generating lib/rte_member_mingw with a custom command 00:02:56.110 [357/738] Generating lib/rte_pcapng_def with a custom command 00:02:56.110 [358/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:56.110 [359/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:56.110 [360/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:56.368 [361/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:56.368 [362/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:56.368 [363/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.368 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:56.368 [365/738] Linking target lib/librte_cryptodev.so.23.0 00:02:56.368 [366/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.626 [367/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:56.626 [368/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:56.626 [369/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:56.626 [370/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:56.626 [371/738] Linking static target lib/librte_lpm.a 00:02:56.626 [372/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:56.626 [373/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:56.626 [374/738] Linking static target lib/librte_pcapng.a 00:02:56.626 
[375/738] Linking target lib/librte_eventdev.so.23.0 00:02:56.626 [376/738] Generating lib/rte_power_def with a custom command 00:02:56.626 [377/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:56.626 [378/738] Generating lib/rte_power_mingw with a custom command 00:02:56.626 [379/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:56.626 [380/738] Generating lib/rte_rawdev_def with a custom command 00:02:56.626 [381/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:56.626 [382/738] Generating lib/rte_regexdev_def with a custom command 00:02:56.626 [383/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:56.626 [384/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:56.626 [385/738] Generating lib/rte_dmadev_def with a custom command 00:02:56.626 [386/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:56.883 [387/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.884 [388/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.884 [389/738] Linking target lib/librte_pcapng.so.23.0 00:02:56.884 [390/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:56.884 [391/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:56.884 [392/738] Linking target lib/librte_lpm.so.23.0 00:02:56.884 [393/738] Linking static target lib/librte_rawdev.a 00:02:56.884 [394/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:56.884 [395/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:56.884 [396/738] Generating lib/rte_rib_def with a custom command 00:02:56.884 [397/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:56.884 [398/738] Generating lib/rte_rib_mingw with a custom command 00:02:56.884 [399/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:56.884 [400/738] Linking static target lib/librte_dmadev.a 00:02:56.884 [401/738] Generating lib/rte_reorder_mingw with a custom command 00:02:56.884 [402/738] Generating lib/rte_reorder_def with a custom command 00:02:57.140 [403/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:57.140 [404/738] Linking static target lib/librte_member.a 00:02:57.140 [405/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:57.140 [406/738] Linking static target lib/librte_power.a 00:02:57.140 [407/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:57.140 [408/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:57.140 [409/738] Linking static target lib/librte_regexdev.a 00:02:57.140 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:57.141 [411/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.141 [412/738] Linking target lib/librte_rawdev.so.23.0 00:02:57.141 [413/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:57.141 [414/738] Generating lib/rte_sched_def with a custom command 00:02:57.141 [415/738] Generating lib/rte_sched_mingw with a custom command 00:02:57.141 [416/738] Generating lib/rte_security_def with a custom command 00:02:57.141 [417/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.398 [418/738] 
Generating lib/rte_security_mingw with a custom command 00:02:57.398 [419/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:57.398 [420/738] Linking target lib/librte_member.so.23.0 00:02:57.398 [421/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.398 [422/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:57.398 [423/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:57.398 [424/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:57.398 [425/738] Linking static target lib/librte_reorder.a 00:02:57.398 [426/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:57.398 [427/738] Linking target lib/librte_dmadev.so.23.0 00:02:57.399 [428/738] Linking static target lib/librte_stack.a 00:02:57.399 [429/738] Generating lib/rte_stack_def with a custom command 00:02:57.399 [430/738] Generating lib/rte_stack_mingw with a custom command 00:02:57.399 [431/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:57.399 [432/738] Linking static target lib/librte_rib.a 00:02:57.399 [433/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:57.656 [434/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.656 [435/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.656 [436/738] Linking target lib/librte_stack.so.23.0 00:02:57.656 [437/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.656 [438/738] Linking target lib/librte_reorder.so.23.0 00:02:57.656 [439/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:57.656 [440/738] Linking static target lib/librte_security.a 00:02:57.656 [441/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.656 [442/738] Linking target lib/librte_regexdev.so.23.0 00:02:57.656 [443/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.656 [444/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.914 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.914 [446/738] Linking target lib/librte_power.so.23.0 00:02:57.914 [447/738] Linking target lib/librte_rib.so.23.0 00:02:57.914 [448/738] Generating lib/rte_vhost_def with a custom command 00:02:57.914 [449/738] Generating lib/rte_vhost_mingw with a custom command 00:02:57.914 [450/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:57.914 [451/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:57.914 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.914 [453/738] Linking target lib/librte_security.so.23.0 00:02:58.172 [454/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:58.172 [455/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:58.172 [456/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:58.172 [457/738] Linking static target lib/librte_sched.a 00:02:58.430 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:58.430 [459/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.430 [460/738] Linking target lib/librte_sched.so.23.0 00:02:58.430 [461/738] 
Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:58.430 [462/738] Generating lib/rte_ipsec_def with a custom command 00:02:58.430 [463/738] Generating lib/rte_ipsec_mingw with a custom command 00:02:58.430 [464/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:58.430 [465/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:58.688 [466/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:58.688 [467/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:58.688 [468/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:58.945 [469/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:58.945 [470/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:58.945 [471/738] Generating lib/rte_fib_def with a custom command 00:02:58.945 [472/738] Generating lib/rte_fib_mingw with a custom command 00:02:58.945 [473/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:58.945 [474/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:59.202 [475/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:59.202 [476/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:59.202 [477/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:59.202 [478/738] Linking static target lib/librte_fib.a 00:02:59.459 [479/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:59.459 [480/738] Linking static target lib/librte_ipsec.a 00:02:59.459 [481/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:59.459 [482/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:59.459 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:59.459 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:59.459 [485/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.460 [486/738] Linking target lib/librte_fib.so.23.0 00:02:59.460 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:59.718 [488/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.718 [489/738] Linking target lib/librte_ipsec.so.23.0 00:02:59.976 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:59.976 [491/738] Generating lib/rte_port_def with a custom command 00:02:59.976 [492/738] Generating lib/rte_port_mingw with a custom command 00:02:59.976 [493/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:59.976 [494/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:59.976 [495/738] Generating lib/rte_pdump_def with a custom command 00:02:59.976 [496/738] Generating lib/rte_pdump_mingw with a custom command 00:03:00.233 [497/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:00.233 [498/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:00.233 [499/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:00.233 [500/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:00.233 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:00.233 [502/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:00.233 [503/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:00.494 [504/738] Compiling 
C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:00.494 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:00.494 [506/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:00.494 [507/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:00.752 [508/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:00.752 [509/738] Linking static target lib/librte_port.a 00:03:00.752 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:00.752 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:00.752 [512/738] Linking static target lib/librte_pdump.a 00:03:01.010 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.010 [514/738] Linking target lib/librte_pdump.so.23.0 00:03:01.010 [515/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:01.010 [516/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.010 [517/738] Linking target lib/librte_port.so.23.0 00:03:01.010 [518/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:01.010 [519/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:01.010 [520/738] Generating lib/rte_table_def with a custom command 00:03:01.010 [521/738] Generating lib/rte_table_mingw with a custom command 00:03:01.273 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:01.273 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:01.273 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:01.273 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:01.273 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:01.273 [527/738] Generating lib/rte_pipeline_def with a custom command 00:03:01.273 [528/738] Generating lib/rte_pipeline_mingw with a custom command 00:03:01.273 [529/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:01.273 [530/738] Linking static target lib/librte_table.a 00:03:01.533 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:01.533 [532/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.533 [533/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:01.791 [534/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:01.791 [535/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.791 [536/738] Linking target lib/librte_table.so.23.0 00:03:01.791 [537/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:01.791 [538/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:01.791 [539/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:01.791 [540/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:01.791 [541/738] Generating lib/rte_graph_def with a custom command 00:03:01.791 [542/738] Generating lib/rte_graph_mingw with a custom command 00:03:02.050 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:02.050 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:02.050 [545/738] Linking static target lib/librte_graph.a 00:03:02.050 [546/738] 
Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:02.309 [547/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:02.309 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:02.309 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:02.309 [550/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:02.566 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:02.566 [552/738] Generating lib/rte_node_def with a custom command 00:03:02.566 [553/738] Generating lib/rte_node_mingw with a custom command 00:03:02.566 [554/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:02.567 [555/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:02.567 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.567 [557/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.567 [558/738] Linking target lib/librte_graph.so.23.0 00:03:02.824 [559/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.824 [560/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:02.824 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.824 [562/738] Generating drivers/rte_bus_pci_def with a custom command 00:03:02.824 [563/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:02.824 [564/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:02.824 [565/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:02.824 [566/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.824 [567/738] Generating drivers/rte_bus_vdev_def with a custom command 00:03:02.824 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:02.824 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:02.824 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:02.824 [571/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.824 [572/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.824 [573/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.824 [574/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:03.082 [575/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:03.082 [576/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:03.082 [577/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:03.082 [578/738] Linking static target lib/librte_node.a 00:03:03.082 [579/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:03.082 [580/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.082 [581/738] Linking static target drivers/librte_bus_vdev.a 00:03:03.082 [582/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:03.082 [583/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.082 [584/738] Linking static target drivers/librte_bus_pci.a 00:03:03.339 [585/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.339 [586/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:03.339 [587/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.339 [588/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.339 [589/738] Linking target lib/librte_node.so.23.0 00:03:03.339 [590/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:03.339 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:03.339 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:03.339 [593/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:03.339 [594/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.597 [595/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:03.597 [596/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:03.597 [597/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:03.597 [598/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:03.597 [599/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:03.597 [600/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.597 [601/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.597 [602/738] Linking static target drivers/librte_mempool_ring.a 00:03:03.597 [603/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:03.597 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.597 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:03.854 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:04.110 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:04.110 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:04.367 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:04.625 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:04.625 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:04.625 [612/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:04.901 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:05.197 [614/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:05.197 [615/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:05.197 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:05.197 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:05.197 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:05.197 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:05.763 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:05.763 [621/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:05.763 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:06.020 [623/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:06.020 [624/738] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:06.020 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:06.020 [626/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:06.020 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:06.020 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:06.278 [629/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:06.278 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:06.535 [631/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:06.535 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:06.535 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:06.535 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:06.535 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:06.792 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:06.792 [637/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:06.792 [638/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.792 [639/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:06.792 [640/738] Linking static target drivers/librte_net_i40e.a 00:03:06.792 [641/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:07.051 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:07.051 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:07.051 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:07.051 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:07.309 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:07.309 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:07.309 [648/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.309 [649/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:07.309 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:07.568 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:07.568 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:07.568 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:07.568 [654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:07.568 [655/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:07.827 [656/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:07.827 [657/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:07.827 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:07.827 [659/738] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.827 [660/738] Linking static target lib/librte_vhost.a 00:03:07.827 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:07.827 [662/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:08.086 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:08.086 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:08.344 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:08.344 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:08.603 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:08.603 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:08.861 [669/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.861 [670/738] Linking target lib/librte_vhost.so.23.0 00:03:08.861 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:08.861 [672/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:08.861 [673/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:09.119 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:09.119 [675/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:09.119 [676/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:09.119 [677/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:09.119 [678/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:09.378 [679/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:09.378 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:09.378 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:09.378 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:09.378 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:09.378 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:09.636 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:09.636 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:09.636 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:09.636 [688/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:09.894 [689/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:09.895 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:09.895 [691/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:10.152 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:10.152 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:10.152 [694/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:10.410 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:10.410 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:10.410 [697/738] Compiling C 
object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:10.410 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:10.669 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:10.927 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:10.927 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:10.927 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:10.927 [703/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:10.927 [704/738] Linking static target lib/librte_pipeline.a 00:03:10.927 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:11.186 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:11.186 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:11.186 [708/738] Linking target app/dpdk-dumpcap 00:03:11.444 [709/738] Linking target app/dpdk-pdump 00:03:11.444 [710/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:11.444 [711/738] Linking target app/dpdk-proc-info 00:03:11.702 [712/738] Linking target app/dpdk-test-acl 00:03:11.702 [713/738] Linking target app/dpdk-test-bbdev 00:03:11.702 [714/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:11.702 [715/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:11.702 [716/738] Linking target app/dpdk-test-cmdline 00:03:11.702 [717/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:11.702 [718/738] Linking target app/dpdk-test-compress-perf 00:03:11.961 [719/738] Linking target app/dpdk-test-eventdev 00:03:11.961 [720/738] Linking target app/dpdk-test-crypto-perf 00:03:11.961 [721/738] Linking target app/dpdk-test-flow-perf 00:03:11.961 [722/738] Linking target app/dpdk-test-fib 00:03:11.961 [723/738] Linking target app/dpdk-test-gpudev 00:03:11.961 [724/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:11.961 [725/738] Linking target app/dpdk-test-pipeline 00:03:12.219 [726/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:12.219 [727/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:12.219 [728/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:12.477 [729/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:12.477 [730/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:12.477 [731/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:12.477 [732/738] Linking target app/dpdk-test-sad 00:03:12.735 [733/738] Linking target app/dpdk-testpmd 00:03:12.735 [734/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:12.735 [735/738] Linking target app/dpdk-test-regex 00:03:12.993 [736/738] Linking target app/dpdk-test-security-perf 00:03:13.252 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.252 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:13.252 23:37:01 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:13.252 23:37:01 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:13.252 23:37:01 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:13.512 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:13.512 
[0/1] Installing files. 00:03:13.781 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.783 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:13.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.785 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.785 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.785 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.785 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.785 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:13.786 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.786 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.786 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.786 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.786 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.786 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.786 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.786 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.786 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.786 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.786 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.786 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.786 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.786 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.047 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.047 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:14.047 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:14.047 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:14.047 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:14.047 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:14.047 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:14.047 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:14.047 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:14.047 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:14.047 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:14.047 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:14.047 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:14.047 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:14.047 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:14.047 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:14.047 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:14.047 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:14.047 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:14.047 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:14.047 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
00:03:14.047 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:14.047 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:14.047 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:14.047 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:14.047 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:14.047 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:14.047 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:14.047 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:14.047 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:14.047 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:14.047 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:14.047 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:14.047 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:14.047 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:14.047 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:14.047 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:14.047 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:14.047 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:14.047 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:14.047 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:14.047 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:14.047 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:14.047 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:14.047 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:14.047 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:14.047 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:14.047 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:14.047 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:14.047 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:14.047 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:14.047 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:14.047 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:14.047 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:14.047 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:14.047 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:14.047 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:14.047 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:14.047 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:14.047 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:14.047 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:14.047 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:14.047 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:14.047 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:14.047 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:14.047 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:14.047 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:14.048 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:14.048 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:14.048 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:14.048 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:14.048 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:14.048 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:14.048 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:14.048 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:14.048 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:14.048 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:14.048 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:14.048 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:14.048 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:14.048 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:14.048 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:14.048 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:14.048 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:14.048 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:14.048 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:14.048 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:14.048 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:14.048 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:14.048 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:14.048 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:14.048 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:14.048 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:14.048 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:14.048 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:14.048 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:14.048 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:14.048 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:14.048 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:14.048 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:14.048 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:14.048 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:14.048 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:14.048 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:14.048 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:14.048 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:14.048 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:14.048 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:14.048 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:14.048 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:14.048 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:14.048 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:14.048 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:14.048 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:14.048 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:14.048 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:14.048 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:14.048 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:14.048 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:14.048 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:14.048 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:14.048 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:14.048 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:14.048 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:14.048 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:14.048 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:14.048 23:37:02 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:14.048 23:37:02 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:14.048 00:03:14.048 real 0m32.374s 00:03:14.048 user 3m36.339s 00:03:14.048 sys 0m36.039s 00:03:14.048 23:37:02 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:14.048 23:37:02 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:14.048 ************************************ 00:03:14.048 END TEST build_native_dpdk 00:03:14.048 ************************************ 00:03:14.048 23:37:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:14.048 23:37:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:14.048 23:37:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:14.048 23:37:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:14.048 23:37:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:14.048 23:37:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:14.048 23:37:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:14.048 23:37:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:14.305 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:14.305 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.305 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:14.305 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:14.563 Using 'verbs' RDMA provider 00:03:25.509 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:37.746 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:37.746 Creating mk/config.mk...done. 00:03:37.746 Creating mk/cc.flags.mk...done. 00:03:37.746 Type 'make' to build. 00:03:37.746 23:37:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:37.746 23:37:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:37.746 23:37:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:37.746 23:37:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:37.746 ************************************ 00:03:37.746 START TEST make 00:03:37.746 ************************************ 00:03:37.746 23:37:24 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:37.746 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:37.746 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:37.746 meson setup builddir \ 00:03:37.746 -Dwith-libaio=enabled \ 00:03:37.746 -Dwith-liburing=enabled \ 00:03:37.746 -Dwith-libvfn=disabled \ 00:03:37.746 -Dwith-spdk=disabled \ 00:03:37.746 -Dexamples=false \ 00:03:37.746 -Dtests=false \ 00:03:37.746 -Dtools=false && \ 00:03:37.746 meson compile -C builddir && \ 00:03:37.746 cd -) 00:03:37.746 make[1]: Nothing to be done for 'all'. 
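(Reference note: the configure and make invocations logged above hand the freshly installed DPDK off to SPDK. A minimal sketch of reproducing that hand-off outside the CI wrapper scripts, assuming the same workspace layout as this run and using only flags visible in the log, would be:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --with-xnvme --with-shared     # subset of the flag set logged above
    make -j10                          # same parallelism as the run_test invocation above

configure resolves the external DPDK through the pkg-config files installed earlier (libdpdk.pc under dpdk/build/lib/pkgconfig, as reported in the "Using ... for additional libs" line above); with PKG_CONFIG_PATH pointing at that directory, `pkg-config --libs libdpdk` lists the same librte_* shared objects whose symlinks were created during the install step.)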
00:03:38.682 The Meson build system 00:03:38.682 Version: 1.5.0 00:03:38.682 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:38.682 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:38.682 Build type: native build 00:03:38.682 Project name: xnvme 00:03:38.682 Project version: 0.7.5 00:03:38.682 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:38.682 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:38.682 Host machine cpu family: x86_64 00:03:38.682 Host machine cpu: x86_64 00:03:38.682 Message: host_machine.system: linux 00:03:38.682 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:38.682 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:38.682 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:38.682 Run-time dependency threads found: YES 00:03:38.682 Has header "setupapi.h" : NO 00:03:38.682 Has header "linux/blkzoned.h" : YES 00:03:38.682 Has header "linux/blkzoned.h" : YES (cached) 00:03:38.682 Has header "libaio.h" : YES 00:03:38.682 Library aio found: YES 00:03:38.682 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:38.682 Run-time dependency liburing found: YES 2.2 00:03:38.682 Dependency libvfn skipped: feature with-libvfn disabled 00:03:38.682 Found CMake: /usr/bin/cmake (3.27.7) 00:03:38.682 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:38.682 Subproject spdk : skipped: feature with-spdk disabled 00:03:38.682 Run-time dependency appleframeworks found: NO (tried framework) 00:03:38.682 Run-time dependency appleframeworks found: NO (tried framework) 00:03:38.682 Library rt found: YES 00:03:38.682 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:38.682 Configuring xnvme_config.h using configuration 00:03:38.682 Configuring xnvme.spec using configuration 00:03:38.682 Run-time dependency bash-completion found: YES 2.11 00:03:38.682 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:38.682 Program cp found: YES (/usr/bin/cp) 00:03:38.682 Build targets in project: 3 00:03:38.682 00:03:38.682 xnvme 0.7.5 00:03:38.682 00:03:38.682 Subprojects 00:03:38.682 spdk : NO Feature 'with-spdk' disabled 00:03:38.682 00:03:38.682 User defined options 00:03:38.682 examples : false 00:03:38.682 tests : false 00:03:38.682 tools : false 00:03:38.682 with-libaio : enabled 00:03:38.682 with-liburing: enabled 00:03:38.682 with-libvfn : disabled 00:03:38.682 with-spdk : disabled 00:03:38.682 00:03:38.682 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:38.682 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:38.941 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:38.941 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:38.941 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:38.941 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:38.941 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:38.941 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:38.941 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:38.941 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:38.941 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:38.941 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:38.941 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:38.941 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:38.941 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:38.941 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:38.941 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:38.941 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:38.941 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:38.941 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:38.941 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:38.941 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:38.941 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:38.941 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:38.941 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:38.941 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:39.199 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:39.199 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:39.199 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:39.199 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:39.199 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:39.199 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:39.199 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:39.199 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:39.199 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:39.199 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:39.199 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:39.199 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:39.199 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:39.199 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:39.199 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:39.199 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:39.199 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:39.199 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:39.199 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:39.199 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:39.199 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:39.199 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:39.199 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:39.199 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:39.199 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:39.200 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:39.200 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:39.200 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:39.200 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:39.200 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:39.200 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:39.200 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:39.200 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:39.200 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:39.200 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:39.200 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:39.200 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:39.457 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:39.457 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:39.457 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:39.457 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:39.457 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:39.458 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:39.458 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:39.458 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:39.458 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:39.458 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:39.458 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:39.458 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:39.716 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:39.716 [75/76] Linking static target lib/libxnvme.a 00:03:39.716 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:39.716 INFO: autodetecting backend as ninja 00:03:39.716 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:39.716 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:11.827 CC lib/log/log.o 00:04:11.827 CC lib/log/log_flags.o 00:04:11.827 CC lib/log/log_deprecated.o 00:04:11.827 CC lib/ut_mock/mock.o 00:04:11.827 CC lib/ut/ut.o 00:04:11.827 LIB libspdk_ut_mock.a 00:04:11.827 LIB libspdk_log.a 00:04:11.827 SO libspdk_ut_mock.so.6.0 00:04:11.827 LIB libspdk_ut.a 00:04:11.827 SO libspdk_log.so.7.1 00:04:11.827 SO libspdk_ut.so.2.0 00:04:11.827 SYMLINK libspdk_ut_mock.so 00:04:11.827 SYMLINK libspdk_ut.so 00:04:11.827 SYMLINK libspdk_log.so 00:04:11.827 CC lib/util/base64.o 00:04:11.827 CC lib/util/bit_array.o 00:04:11.827 CC lib/util/cpuset.o 00:04:11.827 CC lib/dma/dma.o 00:04:11.827 CC lib/util/crc16.o 00:04:11.827 CC lib/util/crc32.o 00:04:11.827 CC lib/util/crc32c.o 00:04:11.827 CC lib/ioat/ioat.o 00:04:11.827 CXX lib/trace_parser/trace.o 00:04:11.827 CC lib/vfio_user/host/vfio_user_pci.o 00:04:11.827 CC lib/vfio_user/host/vfio_user.o 00:04:11.827 CC lib/util/crc32_ieee.o 00:04:11.827 CC lib/util/crc64.o 00:04:11.827 CC lib/util/dif.o 00:04:11.827 CC lib/util/fd.o 00:04:11.827 LIB libspdk_dma.a 00:04:11.827 CC lib/util/fd_group.o 00:04:11.827 SO libspdk_dma.so.5.0 00:04:11.827 CC lib/util/file.o 00:04:11.827 CC lib/util/hexlify.o 00:04:11.827 SYMLINK libspdk_dma.so 00:04:11.827 CC lib/util/iov.o 00:04:11.827 CC lib/util/math.o 
00:04:11.827 LIB libspdk_ioat.a 00:04:11.827 CC lib/util/net.o 00:04:11.827 SO libspdk_ioat.so.7.0 00:04:11.827 LIB libspdk_vfio_user.a 00:04:11.827 CC lib/util/pipe.o 00:04:11.827 SYMLINK libspdk_ioat.so 00:04:11.827 CC lib/util/strerror_tls.o 00:04:11.827 CC lib/util/string.o 00:04:11.827 SO libspdk_vfio_user.so.5.0 00:04:12.086 CC lib/util/uuid.o 00:04:12.086 SYMLINK libspdk_vfio_user.so 00:04:12.086 CC lib/util/xor.o 00:04:12.086 CC lib/util/zipf.o 00:04:12.086 CC lib/util/md5.o 00:04:12.350 LIB libspdk_util.a 00:04:12.350 SO libspdk_util.so.10.1 00:04:12.612 LIB libspdk_trace_parser.a 00:04:12.612 SYMLINK libspdk_util.so 00:04:12.612 SO libspdk_trace_parser.so.6.0 00:04:12.612 SYMLINK libspdk_trace_parser.so 00:04:12.612 CC lib/idxd/idxd.o 00:04:12.612 CC lib/idxd/idxd_kernel.o 00:04:12.612 CC lib/idxd/idxd_user.o 00:04:12.612 CC lib/json/json_parse.o 00:04:12.612 CC lib/json/json_write.o 00:04:12.612 CC lib/json/json_util.o 00:04:12.612 CC lib/rdma_utils/rdma_utils.o 00:04:12.612 CC lib/vmd/vmd.o 00:04:12.612 CC lib/env_dpdk/env.o 00:04:12.612 CC lib/conf/conf.o 00:04:12.870 CC lib/env_dpdk/memory.o 00:04:12.870 LIB libspdk_conf.a 00:04:12.870 CC lib/env_dpdk/pci.o 00:04:12.870 SO libspdk_conf.so.6.0 00:04:12.870 LIB libspdk_rdma_utils.a 00:04:12.870 CC lib/env_dpdk/init.o 00:04:12.870 CC lib/env_dpdk/threads.o 00:04:12.870 SYMLINK libspdk_conf.so 00:04:12.870 SO libspdk_rdma_utils.so.1.0 00:04:13.127 LIB libspdk_json.a 00:04:13.127 CC lib/vmd/led.o 00:04:13.127 SO libspdk_json.so.6.0 00:04:13.127 SYMLINK libspdk_rdma_utils.so 00:04:13.127 SYMLINK libspdk_json.so 00:04:13.127 CC lib/env_dpdk/pci_ioat.o 00:04:13.127 CC lib/env_dpdk/pci_virtio.o 00:04:13.127 CC lib/env_dpdk/pci_vmd.o 00:04:13.127 CC lib/rdma_provider/common.o 00:04:13.127 CC lib/env_dpdk/pci_idxd.o 00:04:13.127 CC lib/env_dpdk/pci_event.o 00:04:13.127 CC lib/env_dpdk/sigbus_handler.o 00:04:13.386 CC lib/env_dpdk/pci_dpdk.o 00:04:13.386 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:13.386 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:13.386 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:13.386 LIB libspdk_idxd.a 00:04:13.386 SO libspdk_idxd.so.12.1 00:04:13.386 LIB libspdk_vmd.a 00:04:13.386 SO libspdk_vmd.so.6.0 00:04:13.386 SYMLINK libspdk_idxd.so 00:04:13.386 CC lib/jsonrpc/jsonrpc_server.o 00:04:13.386 CC lib/jsonrpc/jsonrpc_client.o 00:04:13.386 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:13.386 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:13.386 SYMLINK libspdk_vmd.so 00:04:13.644 LIB libspdk_rdma_provider.a 00:04:13.644 SO libspdk_rdma_provider.so.7.0 00:04:13.644 SYMLINK libspdk_rdma_provider.so 00:04:13.644 LIB libspdk_jsonrpc.a 00:04:13.644 SO libspdk_jsonrpc.so.6.0 00:04:13.903 SYMLINK libspdk_jsonrpc.so 00:04:14.159 CC lib/rpc/rpc.o 00:04:14.159 LIB libspdk_env_dpdk.a 00:04:14.159 SO libspdk_env_dpdk.so.15.1 00:04:14.159 LIB libspdk_rpc.a 00:04:14.159 SO libspdk_rpc.so.6.0 00:04:14.416 SYMLINK libspdk_rpc.so 00:04:14.416 SYMLINK libspdk_env_dpdk.so 00:04:14.416 CC lib/trace/trace_flags.o 00:04:14.416 CC lib/trace/trace.o 00:04:14.416 CC lib/trace/trace_rpc.o 00:04:14.416 CC lib/notify/notify.o 00:04:14.416 CC lib/notify/notify_rpc.o 00:04:14.416 CC lib/keyring/keyring.o 00:04:14.416 CC lib/keyring/keyring_rpc.o 00:04:14.675 LIB libspdk_notify.a 00:04:14.675 SO libspdk_notify.so.6.0 00:04:14.675 LIB libspdk_keyring.a 00:04:14.675 SYMLINK libspdk_notify.so 00:04:14.675 LIB libspdk_trace.a 00:04:14.675 SO libspdk_keyring.so.2.0 00:04:14.675 SO libspdk_trace.so.11.0 00:04:14.933 SYMLINK libspdk_keyring.so 00:04:14.933 SYMLINK 
libspdk_trace.so 00:04:14.933 CC lib/sock/sock.o 00:04:14.933 CC lib/sock/sock_rpc.o 00:04:14.933 CC lib/thread/iobuf.o 00:04:14.933 CC lib/thread/thread.o 00:04:15.567 LIB libspdk_sock.a 00:04:15.567 SO libspdk_sock.so.10.0 00:04:15.567 SYMLINK libspdk_sock.so 00:04:15.826 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:15.826 CC lib/nvme/nvme_ns.o 00:04:15.826 CC lib/nvme/nvme_fabric.o 00:04:15.826 CC lib/nvme/nvme_ns_cmd.o 00:04:15.826 CC lib/nvme/nvme_qpair.o 00:04:15.826 CC lib/nvme/nvme_ctrlr.o 00:04:15.826 CC lib/nvme/nvme_pcie.o 00:04:15.826 CC lib/nvme/nvme_pcie_common.o 00:04:15.826 CC lib/nvme/nvme.o 00:04:16.392 CC lib/nvme/nvme_quirks.o 00:04:16.392 CC lib/nvme/nvme_transport.o 00:04:16.392 CC lib/nvme/nvme_discovery.o 00:04:16.392 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:16.650 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:16.650 CC lib/nvme/nvme_tcp.o 00:04:16.650 LIB libspdk_thread.a 00:04:16.650 SO libspdk_thread.so.11.0 00:04:16.650 CC lib/nvme/nvme_opal.o 00:04:16.650 SYMLINK libspdk_thread.so 00:04:16.650 CC lib/nvme/nvme_io_msg.o 00:04:16.650 CC lib/nvme/nvme_poll_group.o 00:04:16.908 CC lib/nvme/nvme_zns.o 00:04:16.908 CC lib/nvme/nvme_stubs.o 00:04:17.166 CC lib/accel/accel.o 00:04:17.166 CC lib/accel/accel_rpc.o 00:04:17.166 CC lib/nvme/nvme_auth.o 00:04:17.166 CC lib/nvme/nvme_cuse.o 00:04:17.166 CC lib/blob/blobstore.o 00:04:17.166 CC lib/blob/request.o 00:04:17.424 CC lib/blob/zeroes.o 00:04:17.424 CC lib/accel/accel_sw.o 00:04:17.424 CC lib/blob/blob_bs_dev.o 00:04:17.424 CC lib/nvme/nvme_rdma.o 00:04:17.682 CC lib/init/json_config.o 00:04:17.682 CC lib/init/subsystem.o 00:04:17.682 CC lib/virtio/virtio.o 00:04:17.682 CC lib/fsdev/fsdev.o 00:04:17.940 CC lib/fsdev/fsdev_io.o 00:04:17.940 CC lib/init/subsystem_rpc.o 00:04:17.940 CC lib/init/rpc.o 00:04:17.940 CC lib/virtio/virtio_vhost_user.o 00:04:17.940 CC lib/fsdev/fsdev_rpc.o 00:04:17.940 CC lib/virtio/virtio_vfio_user.o 00:04:17.940 CC lib/virtio/virtio_pci.o 00:04:17.940 LIB libspdk_init.a 00:04:18.198 SO libspdk_init.so.6.0 00:04:18.198 SYMLINK libspdk_init.so 00:04:18.198 LIB libspdk_accel.a 00:04:18.198 SO libspdk_accel.so.16.0 00:04:18.198 SYMLINK libspdk_accel.so 00:04:18.198 LIB libspdk_fsdev.a 00:04:18.198 CC lib/event/reactor.o 00:04:18.198 CC lib/event/app.o 00:04:18.198 CC lib/event/app_rpc.o 00:04:18.198 CC lib/event/log_rpc.o 00:04:18.198 CC lib/event/scheduler_static.o 00:04:18.457 SO libspdk_fsdev.so.2.0 00:04:18.457 LIB libspdk_virtio.a 00:04:18.457 SO libspdk_virtio.so.7.0 00:04:18.457 SYMLINK libspdk_fsdev.so 00:04:18.457 CC lib/bdev/bdev.o 00:04:18.457 CC lib/bdev/bdev_rpc.o 00:04:18.457 CC lib/bdev/bdev_zone.o 00:04:18.457 SYMLINK libspdk_virtio.so 00:04:18.457 CC lib/bdev/part.o 00:04:18.457 CC lib/bdev/scsi_nvme.o 00:04:18.457 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:18.715 LIB libspdk_event.a 00:04:18.715 SO libspdk_event.so.14.0 00:04:18.973 SYMLINK libspdk_event.so 00:04:18.973 LIB libspdk_nvme.a 00:04:18.973 SO libspdk_nvme.so.15.0 00:04:19.232 LIB libspdk_fuse_dispatcher.a 00:04:19.232 SO libspdk_fuse_dispatcher.so.1.0 00:04:19.232 SYMLINK libspdk_fuse_dispatcher.so 00:04:19.232 SYMLINK libspdk_nvme.so 00:04:20.181 LIB libspdk_blob.a 00:04:20.181 SO libspdk_blob.so.12.0 00:04:20.441 SYMLINK libspdk_blob.so 00:04:20.701 CC lib/lvol/lvol.o 00:04:20.701 CC lib/blobfs/blobfs.o 00:04:20.701 CC lib/blobfs/tree.o 00:04:20.701 LIB libspdk_bdev.a 00:04:20.701 SO libspdk_bdev.so.17.0 00:04:20.960 SYMLINK libspdk_bdev.so 00:04:20.960 CC lib/ublk/ublk.o 00:04:20.960 CC lib/ublk/ublk_rpc.o 00:04:20.960 CC 
lib/nvmf/ctrlr.o 00:04:20.960 CC lib/nvmf/ctrlr_bdev.o 00:04:20.960 CC lib/nvmf/ctrlr_discovery.o 00:04:20.960 CC lib/nbd/nbd.o 00:04:20.960 CC lib/ftl/ftl_core.o 00:04:20.960 CC lib/scsi/dev.o 00:04:21.227 CC lib/scsi/lun.o 00:04:21.227 CC lib/scsi/port.o 00:04:21.227 CC lib/scsi/scsi.o 00:04:21.486 CC lib/nvmf/subsystem.o 00:04:21.486 CC lib/ftl/ftl_init.o 00:04:21.486 LIB libspdk_blobfs.a 00:04:21.486 SO libspdk_blobfs.so.11.0 00:04:21.486 CC lib/nbd/nbd_rpc.o 00:04:21.486 CC lib/scsi/scsi_bdev.o 00:04:21.486 CC lib/nvmf/nvmf.o 00:04:21.486 SYMLINK libspdk_blobfs.so 00:04:21.486 CC lib/ftl/ftl_layout.o 00:04:21.486 LIB libspdk_lvol.a 00:04:21.486 LIB libspdk_ublk.a 00:04:21.486 SO libspdk_lvol.so.11.0 00:04:21.486 SO libspdk_ublk.so.3.0 00:04:21.486 CC lib/nvmf/nvmf_rpc.o 00:04:21.743 CC lib/nvmf/transport.o 00:04:21.743 LIB libspdk_nbd.a 00:04:21.743 SYMLINK libspdk_lvol.so 00:04:21.743 CC lib/nvmf/tcp.o 00:04:21.743 SYMLINK libspdk_ublk.so 00:04:21.743 CC lib/nvmf/stubs.o 00:04:21.743 SO libspdk_nbd.so.7.0 00:04:21.743 SYMLINK libspdk_nbd.so 00:04:21.743 CC lib/nvmf/mdns_server.o 00:04:21.743 CC lib/ftl/ftl_debug.o 00:04:22.001 CC lib/ftl/ftl_io.o 00:04:22.001 CC lib/scsi/scsi_pr.o 00:04:22.001 CC lib/nvmf/rdma.o 00:04:22.260 CC lib/ftl/ftl_sb.o 00:04:22.260 CC lib/nvmf/auth.o 00:04:22.260 CC lib/scsi/scsi_rpc.o 00:04:22.260 CC lib/ftl/ftl_l2p.o 00:04:22.260 CC lib/scsi/task.o 00:04:22.260 CC lib/ftl/ftl_l2p_flat.o 00:04:22.260 CC lib/ftl/ftl_nv_cache.o 00:04:22.260 CC lib/ftl/ftl_band.o 00:04:22.519 CC lib/ftl/ftl_band_ops.o 00:04:22.519 CC lib/ftl/ftl_writer.o 00:04:22.519 LIB libspdk_scsi.a 00:04:22.519 SO libspdk_scsi.so.9.0 00:04:22.519 CC lib/ftl/ftl_rq.o 00:04:22.519 SYMLINK libspdk_scsi.so 00:04:22.519 CC lib/ftl/ftl_reloc.o 00:04:22.777 CC lib/ftl/ftl_l2p_cache.o 00:04:22.777 CC lib/ftl/ftl_p2l.o 00:04:22.777 CC lib/ftl/ftl_p2l_log.o 00:04:22.777 CC lib/iscsi/conn.o 00:04:22.777 CC lib/iscsi/init_grp.o 00:04:22.777 CC lib/vhost/vhost.o 00:04:23.035 CC lib/vhost/vhost_rpc.o 00:04:23.035 CC lib/vhost/vhost_scsi.o 00:04:23.035 CC lib/vhost/vhost_blk.o 00:04:23.035 CC lib/vhost/rte_vhost_user.o 00:04:23.035 CC lib/iscsi/iscsi.o 00:04:23.293 CC lib/iscsi/param.o 00:04:23.293 CC lib/ftl/mngt/ftl_mngt.o 00:04:23.293 CC lib/iscsi/portal_grp.o 00:04:23.293 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:23.551 CC lib/iscsi/tgt_node.o 00:04:23.551 CC lib/iscsi/iscsi_subsystem.o 00:04:23.551 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:23.551 CC lib/iscsi/iscsi_rpc.o 00:04:23.551 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:23.810 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:23.810 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:23.810 CC lib/iscsi/task.o 00:04:23.810 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:23.810 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:23.810 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:23.810 LIB libspdk_nvmf.a 00:04:24.068 LIB libspdk_vhost.a 00:04:24.068 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:24.068 SO libspdk_vhost.so.8.0 00:04:24.068 SO libspdk_nvmf.so.20.0 00:04:24.068 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:24.068 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:24.068 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:24.068 CC lib/ftl/utils/ftl_conf.o 00:04:24.068 CC lib/ftl/utils/ftl_md.o 00:04:24.068 SYMLINK libspdk_vhost.so 00:04:24.069 CC lib/ftl/utils/ftl_mempool.o 00:04:24.069 CC lib/ftl/utils/ftl_bitmap.o 00:04:24.069 CC lib/ftl/utils/ftl_property.o 00:04:24.069 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:24.069 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:24.069 CC lib/ftl/upgrade/ftl_sb_upgrade.o 
00:04:24.069 LIB libspdk_iscsi.a 00:04:24.327 SYMLINK libspdk_nvmf.so 00:04:24.327 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:24.327 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:24.327 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:24.327 SO libspdk_iscsi.so.8.0 00:04:24.327 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:24.327 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:24.327 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:24.327 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:24.327 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:24.327 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:24.327 SYMLINK libspdk_iscsi.so 00:04:24.327 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:24.327 CC lib/ftl/base/ftl_base_dev.o 00:04:24.327 CC lib/ftl/base/ftl_base_bdev.o 00:04:24.586 CC lib/ftl/ftl_trace.o 00:04:24.586 LIB libspdk_ftl.a 00:04:24.844 SO libspdk_ftl.so.9.0 00:04:25.103 SYMLINK libspdk_ftl.so 00:04:25.360 CC module/env_dpdk/env_dpdk_rpc.o 00:04:25.360 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:25.360 CC module/accel/error/accel_error.o 00:04:25.360 CC module/keyring/file/keyring.o 00:04:25.360 CC module/fsdev/aio/fsdev_aio.o 00:04:25.360 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:25.360 CC module/scheduler/gscheduler/gscheduler.o 00:04:25.360 CC module/keyring/linux/keyring.o 00:04:25.360 CC module/sock/posix/posix.o 00:04:25.360 CC module/blob/bdev/blob_bdev.o 00:04:25.360 LIB libspdk_env_dpdk_rpc.a 00:04:25.360 SO libspdk_env_dpdk_rpc.so.6.0 00:04:25.360 CC module/keyring/file/keyring_rpc.o 00:04:25.360 LIB libspdk_scheduler_gscheduler.a 00:04:25.360 SYMLINK libspdk_env_dpdk_rpc.so 00:04:25.360 CC module/keyring/linux/keyring_rpc.o 00:04:25.360 SO libspdk_scheduler_gscheduler.so.4.0 00:04:25.360 LIB libspdk_scheduler_dpdk_governor.a 00:04:25.360 CC module/accel/error/accel_error_rpc.o 00:04:25.617 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:25.617 LIB libspdk_scheduler_dynamic.a 00:04:25.617 SYMLINK libspdk_scheduler_gscheduler.so 00:04:25.617 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:25.617 SO libspdk_scheduler_dynamic.so.4.0 00:04:25.617 CC module/fsdev/aio/linux_aio_mgr.o 00:04:25.617 LIB libspdk_keyring_file.a 00:04:25.617 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:25.617 SYMLINK libspdk_scheduler_dynamic.so 00:04:25.617 LIB libspdk_blob_bdev.a 00:04:25.617 SO libspdk_keyring_file.so.2.0 00:04:25.617 SO libspdk_blob_bdev.so.12.0 00:04:25.617 LIB libspdk_keyring_linux.a 00:04:25.617 LIB libspdk_accel_error.a 00:04:25.617 SO libspdk_keyring_linux.so.1.0 00:04:25.617 SO libspdk_accel_error.so.2.0 00:04:25.617 SYMLINK libspdk_blob_bdev.so 00:04:25.617 SYMLINK libspdk_keyring_file.so 00:04:25.618 SYMLINK libspdk_keyring_linux.so 00:04:25.618 SYMLINK libspdk_accel_error.so 00:04:25.618 CC module/accel/ioat/accel_ioat.o 00:04:25.618 CC module/accel/ioat/accel_ioat_rpc.o 00:04:25.618 CC module/accel/dsa/accel_dsa.o 00:04:25.618 CC module/accel/dsa/accel_dsa_rpc.o 00:04:25.876 CC module/accel/iaa/accel_iaa.o 00:04:25.876 CC module/bdev/delay/vbdev_delay.o 00:04:25.876 LIB libspdk_accel_ioat.a 00:04:25.876 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:25.876 CC module/bdev/error/vbdev_error.o 00:04:25.876 CC module/blobfs/bdev/blobfs_bdev.o 00:04:25.876 SO libspdk_accel_ioat.so.6.0 00:04:25.876 LIB libspdk_fsdev_aio.a 00:04:25.876 SO libspdk_fsdev_aio.so.1.0 00:04:25.876 SYMLINK libspdk_accel_ioat.so 00:04:25.876 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:25.876 CC module/accel/iaa/accel_iaa_rpc.o 00:04:25.876 CC module/bdev/gpt/gpt.o 00:04:25.876 LIB libspdk_accel_dsa.a 00:04:25.876 SYMLINK 
libspdk_fsdev_aio.so 00:04:25.876 SO libspdk_accel_dsa.so.5.0 00:04:26.134 CC module/bdev/error/vbdev_error_rpc.o 00:04:26.134 SYMLINK libspdk_accel_dsa.so 00:04:26.134 CC module/bdev/gpt/vbdev_gpt.o 00:04:26.134 LIB libspdk_accel_iaa.a 00:04:26.134 LIB libspdk_blobfs_bdev.a 00:04:26.134 SO libspdk_accel_iaa.so.3.0 00:04:26.134 SO libspdk_blobfs_bdev.so.6.0 00:04:26.134 LIB libspdk_sock_posix.a 00:04:26.134 CC module/bdev/lvol/vbdev_lvol.o 00:04:26.134 SYMLINK libspdk_accel_iaa.so 00:04:26.134 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:26.134 SYMLINK libspdk_blobfs_bdev.so 00:04:26.134 SO libspdk_sock_posix.so.6.0 00:04:26.134 LIB libspdk_bdev_error.a 00:04:26.134 CC module/bdev/malloc/bdev_malloc.o 00:04:26.134 SO libspdk_bdev_error.so.6.0 00:04:26.134 SYMLINK libspdk_sock_posix.so 00:04:26.134 LIB libspdk_bdev_delay.a 00:04:26.134 CC module/bdev/null/bdev_null.o 00:04:26.134 CC module/bdev/nvme/bdev_nvme.o 00:04:26.134 SO libspdk_bdev_delay.so.6.0 00:04:26.134 LIB libspdk_bdev_gpt.a 00:04:26.134 SYMLINK libspdk_bdev_error.so 00:04:26.134 CC module/bdev/passthru/vbdev_passthru.o 00:04:26.392 SO libspdk_bdev_gpt.so.6.0 00:04:26.392 SYMLINK libspdk_bdev_delay.so 00:04:26.392 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:26.392 SYMLINK libspdk_bdev_gpt.so 00:04:26.392 CC module/bdev/raid/bdev_raid.o 00:04:26.392 CC module/bdev/split/vbdev_split.o 00:04:26.392 CC module/bdev/null/bdev_null_rpc.o 00:04:26.392 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:26.392 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:26.392 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:26.392 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:26.392 LIB libspdk_bdev_lvol.a 00:04:26.392 SO libspdk_bdev_lvol.so.6.0 00:04:26.649 CC module/bdev/split/vbdev_split_rpc.o 00:04:26.649 LIB libspdk_bdev_null.a 00:04:26.649 SYMLINK libspdk_bdev_lvol.so 00:04:26.649 CC module/bdev/nvme/nvme_rpc.o 00:04:26.649 SO libspdk_bdev_null.so.6.0 00:04:26.650 LIB libspdk_bdev_passthru.a 00:04:26.650 SO libspdk_bdev_passthru.so.6.0 00:04:26.650 LIB libspdk_bdev_malloc.a 00:04:26.650 SYMLINK libspdk_bdev_null.so 00:04:26.650 CC module/bdev/nvme/bdev_mdns_client.o 00:04:26.650 CC module/bdev/raid/bdev_raid_rpc.o 00:04:26.650 SO libspdk_bdev_malloc.so.6.0 00:04:26.650 SYMLINK libspdk_bdev_passthru.so 00:04:26.650 LIB libspdk_bdev_split.a 00:04:26.650 CC module/bdev/raid/bdev_raid_sb.o 00:04:26.650 SO libspdk_bdev_split.so.6.0 00:04:26.650 SYMLINK libspdk_bdev_malloc.so 00:04:26.650 LIB libspdk_bdev_zone_block.a 00:04:26.650 SO libspdk_bdev_zone_block.so.6.0 00:04:26.650 SYMLINK libspdk_bdev_split.so 00:04:26.650 CC module/bdev/nvme/vbdev_opal.o 00:04:26.650 CC module/bdev/raid/raid0.o 00:04:26.907 SYMLINK libspdk_bdev_zone_block.so 00:04:26.907 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:26.907 CC module/bdev/raid/raid1.o 00:04:26.907 CC module/bdev/aio/bdev_aio.o 00:04:26.907 CC module/bdev/xnvme/bdev_xnvme.o 00:04:26.907 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:26.907 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:26.907 CC module/bdev/aio/bdev_aio_rpc.o 00:04:26.907 CC module/bdev/ftl/bdev_ftl.o 00:04:26.907 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:27.165 CC module/bdev/raid/concat.o 00:04:27.165 LIB libspdk_bdev_aio.a 00:04:27.165 CC module/bdev/iscsi/bdev_iscsi.o 00:04:27.165 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:27.165 LIB libspdk_bdev_xnvme.a 00:04:27.165 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:27.165 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:27.165 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:27.165 SO 
libspdk_bdev_aio.so.6.0 00:04:27.165 SO libspdk_bdev_xnvme.so.3.0 00:04:27.165 LIB libspdk_bdev_ftl.a 00:04:27.165 SO libspdk_bdev_ftl.so.6.0 00:04:27.165 SYMLINK libspdk_bdev_aio.so 00:04:27.166 SYMLINK libspdk_bdev_xnvme.so 00:04:27.423 SYMLINK libspdk_bdev_ftl.so 00:04:27.423 LIB libspdk_bdev_raid.a 00:04:27.423 SO libspdk_bdev_raid.so.6.0 00:04:27.423 SYMLINK libspdk_bdev_raid.so 00:04:27.423 LIB libspdk_bdev_iscsi.a 00:04:27.683 SO libspdk_bdev_iscsi.so.6.0 00:04:27.683 LIB libspdk_bdev_virtio.a 00:04:27.683 SYMLINK libspdk_bdev_iscsi.so 00:04:27.683 SO libspdk_bdev_virtio.so.6.0 00:04:27.683 SYMLINK libspdk_bdev_virtio.so 00:04:28.656 LIB libspdk_bdev_nvme.a 00:04:28.656 SO libspdk_bdev_nvme.so.7.1 00:04:28.913 SYMLINK libspdk_bdev_nvme.so 00:04:29.171 CC module/event/subsystems/vmd/vmd.o 00:04:29.171 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:29.171 CC module/event/subsystems/scheduler/scheduler.o 00:04:29.171 CC module/event/subsystems/sock/sock.o 00:04:29.171 CC module/event/subsystems/fsdev/fsdev.o 00:04:29.171 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:29.171 CC module/event/subsystems/keyring/keyring.o 00:04:29.171 CC module/event/subsystems/iobuf/iobuf.o 00:04:29.171 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:29.429 LIB libspdk_event_keyring.a 00:04:29.429 LIB libspdk_event_fsdev.a 00:04:29.429 LIB libspdk_event_sock.a 00:04:29.429 SO libspdk_event_keyring.so.1.0 00:04:29.429 LIB libspdk_event_vhost_blk.a 00:04:29.429 LIB libspdk_event_scheduler.a 00:04:29.429 LIB libspdk_event_vmd.a 00:04:29.429 SO libspdk_event_sock.so.5.0 00:04:29.429 SO libspdk_event_fsdev.so.1.0 00:04:29.429 LIB libspdk_event_iobuf.a 00:04:29.429 SO libspdk_event_scheduler.so.4.0 00:04:29.429 SO libspdk_event_vhost_blk.so.3.0 00:04:29.429 SO libspdk_event_vmd.so.6.0 00:04:29.429 SO libspdk_event_iobuf.so.3.0 00:04:29.429 SYMLINK libspdk_event_keyring.so 00:04:29.429 SYMLINK libspdk_event_sock.so 00:04:29.429 SYMLINK libspdk_event_vhost_blk.so 00:04:29.429 SYMLINK libspdk_event_fsdev.so 00:04:29.429 SYMLINK libspdk_event_scheduler.so 00:04:29.429 SYMLINK libspdk_event_vmd.so 00:04:29.429 SYMLINK libspdk_event_iobuf.so 00:04:29.687 CC module/event/subsystems/accel/accel.o 00:04:29.947 LIB libspdk_event_accel.a 00:04:29.947 SO libspdk_event_accel.so.6.0 00:04:29.947 SYMLINK libspdk_event_accel.so 00:04:30.205 CC module/event/subsystems/bdev/bdev.o 00:04:30.205 LIB libspdk_event_bdev.a 00:04:30.205 SO libspdk_event_bdev.so.6.0 00:04:30.464 SYMLINK libspdk_event_bdev.so 00:04:30.464 CC module/event/subsystems/ublk/ublk.o 00:04:30.464 CC module/event/subsystems/scsi/scsi.o 00:04:30.464 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:30.464 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:30.464 CC module/event/subsystems/nbd/nbd.o 00:04:30.724 LIB libspdk_event_scsi.a 00:04:30.724 LIB libspdk_event_ublk.a 00:04:30.724 LIB libspdk_event_nbd.a 00:04:30.724 SO libspdk_event_ublk.so.3.0 00:04:30.724 SO libspdk_event_scsi.so.6.0 00:04:30.724 SO libspdk_event_nbd.so.6.0 00:04:30.724 SYMLINK libspdk_event_nbd.so 00:04:30.724 SYMLINK libspdk_event_ublk.so 00:04:30.724 SYMLINK libspdk_event_scsi.so 00:04:30.724 LIB libspdk_event_nvmf.a 00:04:30.724 SO libspdk_event_nvmf.so.6.0 00:04:30.724 SYMLINK libspdk_event_nvmf.so 00:04:30.985 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:30.985 CC module/event/subsystems/iscsi/iscsi.o 00:04:30.985 LIB libspdk_event_vhost_scsi.a 00:04:30.985 LIB libspdk_event_iscsi.a 00:04:30.985 SO libspdk_event_vhost_scsi.so.3.0 00:04:31.252 SO 
libspdk_event_iscsi.so.6.0 00:04:31.252 SYMLINK libspdk_event_vhost_scsi.so 00:04:31.252 SYMLINK libspdk_event_iscsi.so 00:04:31.252 SO libspdk.so.6.0 00:04:31.252 SYMLINK libspdk.so 00:04:31.509 CC app/spdk_lspci/spdk_lspci.o 00:04:31.510 CXX app/trace/trace.o 00:04:31.510 CC app/trace_record/trace_record.o 00:04:31.510 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:31.510 CC app/iscsi_tgt/iscsi_tgt.o 00:04:31.510 CC app/nvmf_tgt/nvmf_main.o 00:04:31.510 CC examples/util/zipf/zipf.o 00:04:31.510 CC examples/ioat/perf/perf.o 00:04:31.510 CC app/spdk_tgt/spdk_tgt.o 00:04:31.510 CC test/thread/poller_perf/poller_perf.o 00:04:31.767 LINK spdk_lspci 00:04:31.767 LINK interrupt_tgt 00:04:31.767 LINK zipf 00:04:31.767 LINK iscsi_tgt 00:04:31.767 LINK nvmf_tgt 00:04:31.767 LINK poller_perf 00:04:31.767 LINK spdk_tgt 00:04:31.767 LINK spdk_trace_record 00:04:31.767 LINK ioat_perf 00:04:31.767 CC examples/ioat/verify/verify.o 00:04:31.767 LINK spdk_trace 00:04:32.026 TEST_HEADER include/spdk/accel.h 00:04:32.026 TEST_HEADER include/spdk/accel_module.h 00:04:32.026 TEST_HEADER include/spdk/assert.h 00:04:32.026 TEST_HEADER include/spdk/barrier.h 00:04:32.026 TEST_HEADER include/spdk/base64.h 00:04:32.026 TEST_HEADER include/spdk/bdev.h 00:04:32.026 CC app/spdk_nvme_perf/perf.o 00:04:32.026 TEST_HEADER include/spdk/bdev_module.h 00:04:32.026 CC app/spdk_nvme_identify/identify.o 00:04:32.026 TEST_HEADER include/spdk/bdev_zone.h 00:04:32.026 TEST_HEADER include/spdk/bit_array.h 00:04:32.026 TEST_HEADER include/spdk/bit_pool.h 00:04:32.026 TEST_HEADER include/spdk/blob_bdev.h 00:04:32.026 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:32.026 TEST_HEADER include/spdk/blobfs.h 00:04:32.026 TEST_HEADER include/spdk/blob.h 00:04:32.026 TEST_HEADER include/spdk/conf.h 00:04:32.026 TEST_HEADER include/spdk/config.h 00:04:32.026 TEST_HEADER include/spdk/cpuset.h 00:04:32.026 TEST_HEADER include/spdk/crc16.h 00:04:32.026 TEST_HEADER include/spdk/crc32.h 00:04:32.026 TEST_HEADER include/spdk/crc64.h 00:04:32.026 TEST_HEADER include/spdk/dif.h 00:04:32.026 CC test/dma/test_dma/test_dma.o 00:04:32.026 TEST_HEADER include/spdk/dma.h 00:04:32.026 TEST_HEADER include/spdk/endian.h 00:04:32.026 TEST_HEADER include/spdk/env_dpdk.h 00:04:32.026 TEST_HEADER include/spdk/env.h 00:04:32.026 CC app/spdk_nvme_discover/discovery_aer.o 00:04:32.026 TEST_HEADER include/spdk/event.h 00:04:32.026 TEST_HEADER include/spdk/fd_group.h 00:04:32.026 TEST_HEADER include/spdk/fd.h 00:04:32.026 TEST_HEADER include/spdk/file.h 00:04:32.026 TEST_HEADER include/spdk/fsdev.h 00:04:32.026 TEST_HEADER include/spdk/fsdev_module.h 00:04:32.026 TEST_HEADER include/spdk/ftl.h 00:04:32.026 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:32.026 TEST_HEADER include/spdk/gpt_spec.h 00:04:32.026 TEST_HEADER include/spdk/hexlify.h 00:04:32.026 TEST_HEADER include/spdk/histogram_data.h 00:04:32.026 TEST_HEADER include/spdk/idxd.h 00:04:32.026 TEST_HEADER include/spdk/idxd_spec.h 00:04:32.026 TEST_HEADER include/spdk/init.h 00:04:32.026 TEST_HEADER include/spdk/ioat.h 00:04:32.026 TEST_HEADER include/spdk/ioat_spec.h 00:04:32.026 CC test/app/bdev_svc/bdev_svc.o 00:04:32.026 TEST_HEADER include/spdk/iscsi_spec.h 00:04:32.026 TEST_HEADER include/spdk/json.h 00:04:32.026 TEST_HEADER include/spdk/jsonrpc.h 00:04:32.026 TEST_HEADER include/spdk/keyring.h 00:04:32.026 TEST_HEADER include/spdk/keyring_module.h 00:04:32.026 TEST_HEADER include/spdk/likely.h 00:04:32.026 TEST_HEADER include/spdk/log.h 00:04:32.026 TEST_HEADER include/spdk/lvol.h 00:04:32.026 
TEST_HEADER include/spdk/md5.h 00:04:32.026 TEST_HEADER include/spdk/memory.h 00:04:32.026 TEST_HEADER include/spdk/mmio.h 00:04:32.026 TEST_HEADER include/spdk/nbd.h 00:04:32.026 TEST_HEADER include/spdk/net.h 00:04:32.026 TEST_HEADER include/spdk/notify.h 00:04:32.026 LINK verify 00:04:32.026 TEST_HEADER include/spdk/nvme.h 00:04:32.026 TEST_HEADER include/spdk/nvme_intel.h 00:04:32.026 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:32.026 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:32.026 TEST_HEADER include/spdk/nvme_spec.h 00:04:32.026 TEST_HEADER include/spdk/nvme_zns.h 00:04:32.026 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:32.026 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:32.026 CC examples/sock/hello_world/hello_sock.o 00:04:32.026 TEST_HEADER include/spdk/nvmf.h 00:04:32.026 TEST_HEADER include/spdk/nvmf_spec.h 00:04:32.026 TEST_HEADER include/spdk/nvmf_transport.h 00:04:32.026 TEST_HEADER include/spdk/opal.h 00:04:32.026 TEST_HEADER include/spdk/opal_spec.h 00:04:32.026 TEST_HEADER include/spdk/pci_ids.h 00:04:32.026 TEST_HEADER include/spdk/pipe.h 00:04:32.026 TEST_HEADER include/spdk/queue.h 00:04:32.026 CC app/spdk_top/spdk_top.o 00:04:32.026 TEST_HEADER include/spdk/reduce.h 00:04:32.026 TEST_HEADER include/spdk/rpc.h 00:04:32.026 TEST_HEADER include/spdk/scheduler.h 00:04:32.026 TEST_HEADER include/spdk/scsi.h 00:04:32.026 TEST_HEADER include/spdk/scsi_spec.h 00:04:32.026 TEST_HEADER include/spdk/sock.h 00:04:32.026 TEST_HEADER include/spdk/stdinc.h 00:04:32.026 TEST_HEADER include/spdk/string.h 00:04:32.026 CC examples/thread/thread/thread_ex.o 00:04:32.026 TEST_HEADER include/spdk/thread.h 00:04:32.027 TEST_HEADER include/spdk/trace.h 00:04:32.027 TEST_HEADER include/spdk/trace_parser.h 00:04:32.027 TEST_HEADER include/spdk/tree.h 00:04:32.027 TEST_HEADER include/spdk/ublk.h 00:04:32.027 TEST_HEADER include/spdk/util.h 00:04:32.027 TEST_HEADER include/spdk/uuid.h 00:04:32.027 TEST_HEADER include/spdk/version.h 00:04:32.027 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:32.027 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:32.027 TEST_HEADER include/spdk/vhost.h 00:04:32.027 TEST_HEADER include/spdk/vmd.h 00:04:32.027 TEST_HEADER include/spdk/xor.h 00:04:32.027 TEST_HEADER include/spdk/zipf.h 00:04:32.027 CXX test/cpp_headers/accel.o 00:04:32.285 LINK spdk_nvme_discover 00:04:32.285 LINK bdev_svc 00:04:32.285 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:32.285 CXX test/cpp_headers/accel_module.o 00:04:32.285 LINK hello_sock 00:04:32.285 LINK test_dma 00:04:32.285 CC test/app/histogram_perf/histogram_perf.o 00:04:32.285 LINK thread 00:04:32.543 CC test/app/jsoncat/jsoncat.o 00:04:32.543 CXX test/cpp_headers/assert.o 00:04:32.543 CXX test/cpp_headers/barrier.o 00:04:32.543 LINK histogram_perf 00:04:32.543 LINK jsoncat 00:04:32.543 CC test/app/stub/stub.o 00:04:32.543 CXX test/cpp_headers/base64.o 00:04:32.801 LINK nvme_fuzz 00:04:32.801 LINK stub 00:04:32.801 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.801 CC examples/idxd/perf/perf.o 00:04:32.801 CC app/vhost/vhost.o 00:04:32.801 CC app/spdk_dd/spdk_dd.o 00:04:32.801 CXX test/cpp_headers/bdev.o 00:04:32.801 LINK spdk_nvme_identify 00:04:32.801 LINK spdk_nvme_perf 00:04:32.801 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:32.801 CXX test/cpp_headers/bdev_module.o 00:04:32.801 LINK lsvmd 00:04:32.801 LINK vhost 00:04:32.801 CXX test/cpp_headers/bdev_zone.o 00:04:33.061 CXX test/cpp_headers/bit_array.o 00:04:33.061 CXX test/cpp_headers/bit_pool.o 00:04:33.061 LINK idxd_perf 00:04:33.061 CC examples/vmd/led/led.o 00:04:33.061 
CXX test/cpp_headers/blob_bdev.o 00:04:33.061 LINK spdk_top 00:04:33.061 CXX test/cpp_headers/blobfs_bdev.o 00:04:33.061 LINK spdk_dd 00:04:33.061 CXX test/cpp_headers/blobfs.o 00:04:33.061 CXX test/cpp_headers/blob.o 00:04:33.061 CXX test/cpp_headers/conf.o 00:04:33.061 LINK led 00:04:33.318 CXX test/cpp_headers/config.o 00:04:33.318 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:33.318 CC app/fio/nvme/fio_plugin.o 00:04:33.318 CXX test/cpp_headers/cpuset.o 00:04:33.318 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:33.318 CXX test/cpp_headers/crc16.o 00:04:33.318 CC app/fio/bdev/fio_plugin.o 00:04:33.318 CXX test/cpp_headers/crc32.o 00:04:33.318 CC test/event/event_perf/event_perf.o 00:04:33.318 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:33.318 CC test/env/mem_callbacks/mem_callbacks.o 00:04:33.576 CC test/env/vtophys/vtophys.o 00:04:33.576 CC examples/accel/perf/accel_perf.o 00:04:33.576 CXX test/cpp_headers/crc64.o 00:04:33.576 LINK event_perf 00:04:33.576 LINK vtophys 00:04:33.576 LINK mem_callbacks 00:04:33.576 LINK spdk_bdev 00:04:33.576 LINK hello_fsdev 00:04:33.576 LINK vhost_fuzz 00:04:33.576 CXX test/cpp_headers/dif.o 00:04:33.576 CXX test/cpp_headers/dma.o 00:04:33.835 CC test/event/reactor/reactor.o 00:04:33.835 LINK spdk_nvme 00:04:33.835 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:33.835 CXX test/cpp_headers/endian.o 00:04:33.835 CC test/env/memory/memory_ut.o 00:04:33.835 CC test/env/pci/pci_ut.o 00:04:33.835 LINK reactor 00:04:33.835 LINK env_dpdk_post_init 00:04:33.835 CXX test/cpp_headers/env_dpdk.o 00:04:34.094 CC examples/blob/hello_world/hello_blob.o 00:04:34.094 LINK accel_perf 00:04:34.094 CC examples/nvme/hello_world/hello_world.o 00:04:34.094 CC test/nvme/aer/aer.o 00:04:34.094 CC test/event/reactor_perf/reactor_perf.o 00:04:34.094 CXX test/cpp_headers/env.o 00:04:34.094 CC test/rpc_client/rpc_client_test.o 00:04:34.094 LINK reactor_perf 00:04:34.094 CXX test/cpp_headers/event.o 00:04:34.094 LINK hello_blob 00:04:34.094 CC examples/nvme/reconnect/reconnect.o 00:04:34.094 LINK hello_world 00:04:34.352 LINK pci_ut 00:04:34.352 LINK aer 00:04:34.352 LINK rpc_client_test 00:04:34.352 CXX test/cpp_headers/fd_group.o 00:04:34.352 CC test/event/app_repeat/app_repeat.o 00:04:34.352 LINK memory_ut 00:04:34.352 CC test/nvme/reset/reset.o 00:04:34.352 CC examples/blob/cli/blobcli.o 00:04:34.609 CXX test/cpp_headers/fd.o 00:04:34.609 LINK app_repeat 00:04:34.609 LINK iscsi_fuzz 00:04:34.609 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:34.609 CC test/event/scheduler/scheduler.o 00:04:34.609 LINK reconnect 00:04:34.609 CC test/accel/dif/dif.o 00:04:34.609 LINK reset 00:04:34.609 CC test/nvme/sgl/sgl.o 00:04:34.609 CXX test/cpp_headers/file.o 00:04:34.609 CXX test/cpp_headers/fsdev.o 00:04:34.609 CXX test/cpp_headers/fsdev_module.o 00:04:34.609 CXX test/cpp_headers/ftl.o 00:04:34.867 CXX test/cpp_headers/fuse_dispatcher.o 00:04:34.867 LINK scheduler 00:04:34.867 CXX test/cpp_headers/gpt_spec.o 00:04:34.867 CXX test/cpp_headers/hexlify.o 00:04:34.867 LINK sgl 00:04:34.867 CXX test/cpp_headers/histogram_data.o 00:04:34.867 CC examples/bdev/hello_world/hello_bdev.o 00:04:34.867 CC test/nvme/e2edp/nvme_dp.o 00:04:34.867 LINK blobcli 00:04:34.867 CC examples/bdev/bdevperf/bdevperf.o 00:04:35.124 CC test/nvme/overhead/overhead.o 00:04:35.124 CC test/nvme/err_injection/err_injection.o 00:04:35.124 CC test/nvme/startup/startup.o 00:04:35.124 LINK nvme_manage 00:04:35.124 CXX test/cpp_headers/idxd.o 00:04:35.124 LINK startup 00:04:35.124 CXX 
test/cpp_headers/idxd_spec.o 00:04:35.124 CC examples/nvme/arbitration/arbitration.o 00:04:35.124 LINK err_injection 00:04:35.124 LINK nvme_dp 00:04:35.124 LINK hello_bdev 00:04:35.381 LINK overhead 00:04:35.381 CXX test/cpp_headers/init.o 00:04:35.381 LINK dif 00:04:35.381 CC test/nvme/reserve/reserve.o 00:04:35.381 CC test/nvme/simple_copy/simple_copy.o 00:04:35.381 CXX test/cpp_headers/ioat.o 00:04:35.381 CC test/nvme/connect_stress/connect_stress.o 00:04:35.381 LINK arbitration 00:04:35.638 CC test/blobfs/mkfs/mkfs.o 00:04:35.638 CC test/nvme/boot_partition/boot_partition.o 00:04:35.638 LINK reserve 00:04:35.639 CC test/lvol/esnap/esnap.o 00:04:35.639 CC examples/nvme/hotplug/hotplug.o 00:04:35.639 CXX test/cpp_headers/ioat_spec.o 00:04:35.639 CXX test/cpp_headers/iscsi_spec.o 00:04:35.639 LINK simple_copy 00:04:35.639 LINK connect_stress 00:04:35.639 LINK bdevperf 00:04:35.639 CXX test/cpp_headers/json.o 00:04:35.639 LINK boot_partition 00:04:35.639 LINK mkfs 00:04:35.639 CXX test/cpp_headers/jsonrpc.o 00:04:35.639 LINK hotplug 00:04:35.897 CC test/nvme/compliance/nvme_compliance.o 00:04:35.897 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:35.897 CXX test/cpp_headers/keyring.o 00:04:35.897 CC examples/nvme/abort/abort.o 00:04:35.897 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:35.897 CC test/nvme/fused_ordering/fused_ordering.o 00:04:35.897 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:35.897 CC test/nvme/fdp/fdp.o 00:04:35.897 CC test/bdev/bdevio/bdevio.o 00:04:35.897 LINK cmb_copy 00:04:35.897 CXX test/cpp_headers/keyring_module.o 00:04:36.155 LINK pmr_persistence 00:04:36.155 LINK fused_ordering 00:04:36.155 LINK doorbell_aers 00:04:36.155 CXX test/cpp_headers/likely.o 00:04:36.155 LINK nvme_compliance 00:04:36.155 CXX test/cpp_headers/log.o 00:04:36.155 CC test/nvme/cuse/cuse.o 00:04:36.155 LINK fdp 00:04:36.155 CXX test/cpp_headers/lvol.o 00:04:36.155 CXX test/cpp_headers/md5.o 00:04:36.155 LINK abort 00:04:36.155 CXX test/cpp_headers/memory.o 00:04:36.413 CXX test/cpp_headers/mmio.o 00:04:36.413 CXX test/cpp_headers/nbd.o 00:04:36.413 LINK bdevio 00:04:36.413 CXX test/cpp_headers/net.o 00:04:36.413 CXX test/cpp_headers/notify.o 00:04:36.413 CXX test/cpp_headers/nvme.o 00:04:36.413 CXX test/cpp_headers/nvme_intel.o 00:04:36.413 CXX test/cpp_headers/nvme_ocssd.o 00:04:36.413 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:36.413 CXX test/cpp_headers/nvme_spec.o 00:04:36.413 CXX test/cpp_headers/nvme_zns.o 00:04:36.413 CXX test/cpp_headers/nvmf_cmd.o 00:04:36.670 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:36.670 CC examples/nvmf/nvmf/nvmf.o 00:04:36.670 CXX test/cpp_headers/nvmf.o 00:04:36.670 CXX test/cpp_headers/nvmf_spec.o 00:04:36.670 CXX test/cpp_headers/nvmf_transport.o 00:04:36.670 CXX test/cpp_headers/opal.o 00:04:36.670 CXX test/cpp_headers/opal_spec.o 00:04:36.670 CXX test/cpp_headers/pci_ids.o 00:04:36.670 CXX test/cpp_headers/pipe.o 00:04:36.670 CXX test/cpp_headers/queue.o 00:04:36.670 CXX test/cpp_headers/reduce.o 00:04:36.670 CXX test/cpp_headers/rpc.o 00:04:36.670 CXX test/cpp_headers/scheduler.o 00:04:36.670 CXX test/cpp_headers/scsi.o 00:04:36.670 CXX test/cpp_headers/scsi_spec.o 00:04:36.670 LINK nvmf 00:04:36.927 CXX test/cpp_headers/sock.o 00:04:36.927 CXX test/cpp_headers/stdinc.o 00:04:36.927 CXX test/cpp_headers/string.o 00:04:36.927 CXX test/cpp_headers/thread.o 00:04:36.927 CXX test/cpp_headers/trace.o 00:04:36.927 CXX test/cpp_headers/trace_parser.o 00:04:36.927 CXX test/cpp_headers/tree.o 00:04:36.927 CXX test/cpp_headers/ublk.o 00:04:36.927 CXX 
test/cpp_headers/util.o 00:04:36.927 CXX test/cpp_headers/uuid.o 00:04:36.927 CXX test/cpp_headers/version.o 00:04:36.927 CXX test/cpp_headers/vfio_user_pci.o 00:04:36.927 CXX test/cpp_headers/vfio_user_spec.o 00:04:36.927 CXX test/cpp_headers/vhost.o 00:04:36.927 CXX test/cpp_headers/vmd.o 00:04:36.927 CXX test/cpp_headers/xor.o 00:04:37.191 CXX test/cpp_headers/zipf.o 00:04:37.191 LINK cuse 00:04:41.379 LINK esnap 00:04:41.379 00:04:41.379 real 1m5.081s 00:04:41.379 user 5m5.884s 00:04:41.379 sys 0m54.424s 00:04:41.379 23:38:29 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:41.379 23:38:29 make -- common/autotest_common.sh@10 -- $ set +x 00:04:41.379 ************************************ 00:04:41.379 END TEST make 00:04:41.379 ************************************ 00:04:41.379 23:38:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:41.379 23:38:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:41.379 23:38:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:41.379 23:38:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.379 23:38:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:41.379 23:38:29 -- pm/common@44 -- $ pid=5815 00:04:41.379 23:38:29 -- pm/common@50 -- $ kill -TERM 5815 00:04:41.379 23:38:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.379 23:38:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:41.379 23:38:29 -- pm/common@44 -- $ pid=5816 00:04:41.379 23:38:29 -- pm/common@50 -- $ kill -TERM 5816 00:04:41.379 23:38:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:41.379 23:38:29 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:41.640 23:38:29 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.640 23:38:29 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.640 23:38:29 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.640 23:38:29 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.640 23:38:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.640 23:38:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.640 23:38:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.640 23:38:29 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.640 23:38:29 -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.640 23:38:29 -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.640 23:38:29 -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.640 23:38:29 -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.640 23:38:29 -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.640 23:38:29 -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.640 23:38:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.640 23:38:29 -- scripts/common.sh@344 -- # case "$op" in 00:04:41.640 23:38:29 -- scripts/common.sh@345 -- # : 1 00:04:41.640 23:38:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.640 23:38:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.640 23:38:29 -- scripts/common.sh@365 -- # decimal 1 00:04:41.640 23:38:29 -- scripts/common.sh@353 -- # local d=1 00:04:41.640 23:38:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.640 23:38:29 -- scripts/common.sh@355 -- # echo 1 00:04:41.640 23:38:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.640 23:38:29 -- scripts/common.sh@366 -- # decimal 2 00:04:41.640 23:38:29 -- scripts/common.sh@353 -- # local d=2 00:04:41.640 23:38:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.640 23:38:29 -- scripts/common.sh@355 -- # echo 2 00:04:41.640 23:38:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.640 23:38:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.640 23:38:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.640 23:38:29 -- scripts/common.sh@368 -- # return 0 00:04:41.640 23:38:29 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.640 23:38:29 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.640 --rc genhtml_branch_coverage=1 00:04:41.640 --rc genhtml_function_coverage=1 00:04:41.640 --rc genhtml_legend=1 00:04:41.640 --rc geninfo_all_blocks=1 00:04:41.640 --rc geninfo_unexecuted_blocks=1 00:04:41.640 00:04:41.640 ' 00:04:41.640 23:38:29 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.640 --rc genhtml_branch_coverage=1 00:04:41.640 --rc genhtml_function_coverage=1 00:04:41.640 --rc genhtml_legend=1 00:04:41.640 --rc geninfo_all_blocks=1 00:04:41.640 --rc geninfo_unexecuted_blocks=1 00:04:41.640 00:04:41.640 ' 00:04:41.640 23:38:29 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.640 --rc genhtml_branch_coverage=1 00:04:41.640 --rc genhtml_function_coverage=1 00:04:41.640 --rc genhtml_legend=1 00:04:41.640 --rc geninfo_all_blocks=1 00:04:41.640 --rc geninfo_unexecuted_blocks=1 00:04:41.640 00:04:41.640 ' 00:04:41.640 23:38:29 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.640 --rc genhtml_branch_coverage=1 00:04:41.640 --rc genhtml_function_coverage=1 00:04:41.640 --rc genhtml_legend=1 00:04:41.640 --rc geninfo_all_blocks=1 00:04:41.640 --rc geninfo_unexecuted_blocks=1 00:04:41.640 00:04:41.640 ' 00:04:41.640 23:38:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.640 23:38:29 -- nvmf/common.sh@7 -- # uname -s 00:04:41.640 23:38:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.640 23:38:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.640 23:38:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.640 23:38:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.640 23:38:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.640 23:38:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.640 23:38:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.640 23:38:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.640 23:38:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.640 23:38:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.640 23:38:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3185115-c870-4590-bddf-ec6cbfb7ca82 00:04:41.640 
23:38:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=a3185115-c870-4590-bddf-ec6cbfb7ca82 00:04:41.640 23:38:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.640 23:38:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.640 23:38:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.640 23:38:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.640 23:38:29 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.640 23:38:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.640 23:38:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.641 23:38:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.641 23:38:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.641 23:38:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.641 23:38:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.641 23:38:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.641 23:38:29 -- paths/export.sh@5 -- # export PATH 00:04:41.641 23:38:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.641 23:38:29 -- nvmf/common.sh@51 -- # : 0 00:04:41.641 23:38:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.641 23:38:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.641 23:38:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.641 23:38:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.641 23:38:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.641 23:38:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.641 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.641 23:38:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.641 23:38:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.641 23:38:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.641 23:38:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.641 23:38:29 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.641 23:38:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.641 23:38:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.641 23:38:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.641 23:38:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:41.641 23:38:29 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.641 23:38:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:41.641 23:38:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:41.641 23:38:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.641 23:38:29 -- spdk/autotest.sh@48 -- # udevadm_pid=66275 00:04:41.641 23:38:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.641 23:38:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:41.641 23:38:29 -- pm/common@17 -- # local monitor 00:04:41.641 23:38:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.641 23:38:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.641 23:38:29 -- pm/common@25 -- # sleep 1 00:04:41.641 23:38:29 -- pm/common@21 -- # date +%s 00:04:41.641 23:38:29 -- pm/common@21 -- # date +%s 00:04:41.641 23:38:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732664309 00:04:41.641 23:38:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732664309 00:04:41.641 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732664309_collect-vmstat.pm.log 00:04:41.641 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732664309_collect-cpu-load.pm.log 00:04:43.022 23:38:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:43.022 23:38:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:43.022 23:38:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.022 23:38:30 -- common/autotest_common.sh@10 -- # set +x 00:04:43.022 23:38:30 -- spdk/autotest.sh@59 -- # create_test_list 00:04:43.022 23:38:30 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:43.022 23:38:30 -- common/autotest_common.sh@10 -- # set +x 00:04:43.022 23:38:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:43.022 23:38:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:43.022 23:38:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:43.022 23:38:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:43.022 23:38:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:43.022 23:38:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:43.022 23:38:30 -- common/autotest_common.sh@1457 -- # uname 00:04:43.022 23:38:30 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:43.022 23:38:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:43.022 23:38:30 -- common/autotest_common.sh@1477 -- # uname 00:04:43.022 23:38:30 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:43.022 23:38:30 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:43.022 23:38:30 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:43.022 lcov: LCOV version 1.15 00:04:43.022 23:38:30 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:57.958 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:57.958 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:12.872 23:39:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:12.873 23:39:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.873 23:39:00 -- common/autotest_common.sh@10 -- # set +x 00:05:12.873 23:39:00 -- spdk/autotest.sh@78 -- # rm -f 00:05:12.873 23:39:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.704 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:13.704 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:13.704 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:13.704 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:13.965 23:39:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:13.965 23:39:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:13.965 23:39:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:13.965 23:39:01 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:13.965 23:39:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.965 23:39:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.966 23:39:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.966 23:39:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.966 23:39:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:13.966 23:39:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:13.966 23:39:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.966 23:39:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:13.966 23:39:01 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:13.966 23:39:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:13.966 23:39:01 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.966 23:39:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:13.966 23:39:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:13.966 23:39:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:13.966 23:39:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:13.966 23:39:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:13.966 23:39:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.966 23:39:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:13.966 23:39:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:13.966 23:39:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:13.966 23:39:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:13.966 No valid GPT data, bailing 00:05:13.966 23:39:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:13.966 23:39:01 -- scripts/common.sh@394 -- # pt= 00:05:13.966 23:39:01 -- scripts/common.sh@395 -- # return 1 00:05:13.966 23:39:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:13.966 1+0 records in 00:05:13.966 1+0 records out 00:05:13.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193542 s, 54.2 MB/s 00:05:13.966 23:39:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.966 23:39:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:13.966 23:39:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:13.966 23:39:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:13.966 23:39:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:13.966 No valid GPT data, bailing 00:05:13.966 23:39:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:13.966 23:39:02 -- scripts/common.sh@394 -- # pt= 00:05:13.966 23:39:02 -- scripts/common.sh@395 -- # return 1 00:05:13.966 23:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:13.966 1+0 records in 00:05:13.966 1+0 records out 00:05:13.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529966 s, 198 MB/s 00:05:13.966 23:39:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:13.966 23:39:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:13.966 23:39:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:13.966 23:39:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:13.966 23:39:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:13.966 No valid GPT data, bailing 00:05:13.966 23:39:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:14.226 23:39:02 -- scripts/common.sh@394 -- # pt= 00:05:14.226 23:39:02 -- scripts/common.sh@395 -- # return 1 00:05:14.226 23:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:14.226 1+0 
records in 00:05:14.226 1+0 records out 00:05:14.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00565968 s, 185 MB/s 00:05:14.226 23:39:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.226 23:39:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.226 23:39:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:14.226 23:39:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:14.226 23:39:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:14.226 No valid GPT data, bailing 00:05:14.226 23:39:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:14.226 23:39:02 -- scripts/common.sh@394 -- # pt= 00:05:14.226 23:39:02 -- scripts/common.sh@395 -- # return 1 00:05:14.226 23:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:14.226 1+0 records in 00:05:14.226 1+0 records out 00:05:14.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054476 s, 192 MB/s 00:05:14.226 23:39:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.226 23:39:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.226 23:39:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:14.226 23:39:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:14.226 23:39:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:14.226 No valid GPT data, bailing 00:05:14.226 23:39:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:14.226 23:39:02 -- scripts/common.sh@394 -- # pt= 00:05:14.226 23:39:02 -- scripts/common.sh@395 -- # return 1 00:05:14.226 23:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:14.226 1+0 records in 00:05:14.226 1+0 records out 00:05:14.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561906 s, 187 MB/s 00:05:14.226 23:39:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.226 23:39:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.226 23:39:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:14.226 23:39:02 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:14.226 23:39:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:14.226 No valid GPT data, bailing 00:05:14.226 23:39:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:14.226 23:39:02 -- scripts/common.sh@394 -- # pt= 00:05:14.226 23:39:02 -- scripts/common.sh@395 -- # return 1 00:05:14.226 23:39:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:14.226 1+0 records in 00:05:14.226 1+0 records out 00:05:14.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590324 s, 178 MB/s 00:05:14.226 23:39:02 -- spdk/autotest.sh@105 -- # sync 00:05:14.485 23:39:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:14.485 23:39:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:14.485 23:39:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:16.395 23:39:04 -- spdk/autotest.sh@111 -- # uname -s 00:05:16.395 23:39:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:16.395 23:39:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:16.395 23:39:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:16.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.916 
Hugepages 00:05:16.916 node hugesize free / total 00:05:16.916 node0 1048576kB 0 / 0 00:05:17.176 node0 2048kB 0 / 0 00:05:17.176 00:05:17.176 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:17.176 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:17.176 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:17.176 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:17.438 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:17.438 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:17.438 23:39:05 -- spdk/autotest.sh@117 -- # uname -s 00:05:17.438 23:39:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:17.438 23:39:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:17.438 23:39:05 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.273 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.274 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.274 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.534 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.534 23:39:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:19.482 23:39:07 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:19.482 23:39:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:19.482 23:39:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.482 23:39:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:19.482 23:39:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:19.482 23:39:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:19.482 23:39:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.482 23:39:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:19.482 23:39:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:19.482 23:39:07 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:19.482 23:39:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:19.482 23:39:07 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.004 Waiting for block devices as requested 00:05:20.004 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.004 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.264 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.265 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.600 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:25.600 23:39:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.600 23:39:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:25.600 23:39:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.600 23:39:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:25.600 23:39:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.600 23:39:13 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:25.600 23:39:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.600 23:39:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:25.600 23:39:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:25.600 23:39:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:25.600 23:39:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.600 23:39:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:25.600 23:39:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.600 23:39:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.600 23:39:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.600 23:39:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.600 23:39:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:25.600 23:39:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.600 23:39:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.600 23:39:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.600 23:39:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.600 23:39:13 -- common/autotest_common.sh@1543 -- # continue 00:05:25.600 23:39:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.600 23:39:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:25.600 23:39:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.600 23:39:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:25.600 23:39:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.600 23:39:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:25.600 23:39:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.600 23:39:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:25.600 23:39:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:25.600 23:39:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:25.600 23:39:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:25.600 23:39:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.600 23:39:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.601 23:39:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.601 23:39:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.601 23:39:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1543 -- # continue 00:05:25.601 23:39:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.601 23:39:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:25.601 23:39:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.601 23:39:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:25.601 23:39:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.601 23:39:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.601 23:39:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.601 23:39:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1543 -- # continue 00:05:25.601 23:39:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.601 23:39:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:25.601 23:39:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:25.601 23:39:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:25.601 23:39:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:25.601 23:39:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:25.601 23:39:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:25.601 23:39:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:25.601 23:39:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:25.601 23:39:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.601 23:39:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.601 23:39:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.601 23:39:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
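The pre_cleanup trace here resolves each NVMe controller from its PCI address and then inspects two id-ctrl fields: OACS (whether namespace management is supported) and UNVMCAP (whether any capacity is left unallocated). Below is a minimal standalone sketch of that probe, assuming nvme-cli is installed and the script runs as root; the device path and helper name are illustrative, not taken from the SPDK scripts.

#!/usr/bin/env bash
# Sketch only: probe an NVMe controller the way the traced pre_cleanup
# helpers do. Assumes nvme-cli is installed and the script runs as root.
# The device path /dev/nvme0 and the function name are illustrative.
probe_nvme_ctrlr() {
    local ctrlr=$1

    # OACS is reported as a hex word such as " 0x12a"; bit 3 (0x8) set
    # means the controller supports namespace management.
    local oacs ns_manage
    oacs=$(nvme id-ctrl "$ctrlr" | grep -m1 oacs | cut -d: -f2)
    ns_manage=$(( oacs & 0x8 ))

    # UNVMCAP is the unallocated NVM capacity in bytes; a non-zero value
    # would mean some capacity is not attached to any namespace.
    local unvmcap
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep -m1 unvmcap | cut -d: -f2)

    if (( ns_manage != 0 && unvmcap == 0 )); then
        echo "$ctrlr: namespace management supported, all capacity allocated"
    else
        echo "$ctrlr: oacs=$oacs unvmcap=$unvmcap (left alone)"
    fi
}

probe_nvme_ctrlr /dev/nvme0

In the traced run every controller reports oacs=0x12a (namespace-management bit set) and unvmcap=0, so the loop simply continues to the next device.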
00:05:25.601 23:39:13 -- common/autotest_common.sh@1543 -- # continue 00:05:25.601 23:39:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.601 23:39:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.601 23:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:25.601 23:39:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.601 23:39:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.601 23:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:25.601 23:39:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.422 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.422 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.422 23:39:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:26.422 23:39:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.422 23:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:26.422 23:39:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:26.422 23:39:14 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:26.422 23:39:14 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:26.422 23:39:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:26.422 23:39:14 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:26.422 23:39:14 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:26.422 23:39:14 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:26.422 23:39:14 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:26.422 23:39:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:26.422 23:39:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:26.422 23:39:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.422 23:39:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.422 23:39:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:26.682 23:39:14 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:26.682 23:39:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:26.682 23:39:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.682 23:39:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:26.682 23:39:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.682 23:39:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.682 23:39:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.682 23:39:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:26.682 23:39:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.682 23:39:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.682 23:39:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.682 23:39:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:26.682 23:39:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.682 23:39:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
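The opal_revert_cleanup step traced around this point reads each controller's PCI device ID from sysfs and only collects controllers whose ID matches 0x0a54 for an Opal revert; the QEMU-emulated controllers here all report 0x0010, so the list stays empty and nothing is reverted. A small sketch of that sysfs check follows, assuming the PCI address is passed in; the sample address is illustrative.

#!/usr/bin/env bash
# Sketch only: check whether a PCI NVMe controller matches the device ID
# (0x0a54) that the traced opal_revert_cleanup step looks for.
# The sample PCI address below is illustrative.
needs_opal_revert() {
    local bdf=$1 device
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010 or 0x0a54

    if [[ $device == 0x0a54 ]]; then
        echo "$bdf: matching controller, would be queued for Opal revert"
        return 0
    fi
    echo "$bdf: device id $device, nothing to do"
    return 1
}

needs_opal_revert 0000:00:10.0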
00:05:26.682 23:39:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:26.682 23:39:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:26.682 23:39:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:26.682 23:39:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.682 23:39:14 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:26.682 23:39:14 -- common/autotest_common.sh@1572 -- # return 0 00:05:26.682 23:39:14 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:26.682 23:39:14 -- common/autotest_common.sh@1580 -- # return 0 00:05:26.682 23:39:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:26.682 23:39:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:26.682 23:39:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.682 23:39:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.682 23:39:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:26.682 23:39:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.682 23:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:26.682 23:39:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:26.682 23:39:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.682 23:39:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.682 23:39:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.682 23:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:26.682 ************************************ 00:05:26.682 START TEST env 00:05:26.682 ************************************ 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.682 * Looking for test storage... 00:05:26.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.682 23:39:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.682 23:39:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.682 23:39:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.682 23:39:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.682 23:39:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.682 23:39:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.682 23:39:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.682 23:39:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.682 23:39:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.682 23:39:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.682 23:39:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.682 23:39:14 env -- scripts/common.sh@344 -- # case "$op" in 00:05:26.682 23:39:14 env -- scripts/common.sh@345 -- # : 1 00:05:26.682 23:39:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.682 23:39:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.682 23:39:14 env -- scripts/common.sh@365 -- # decimal 1 00:05:26.682 23:39:14 env -- scripts/common.sh@353 -- # local d=1 00:05:26.682 23:39:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.682 23:39:14 env -- scripts/common.sh@355 -- # echo 1 00:05:26.682 23:39:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.682 23:39:14 env -- scripts/common.sh@366 -- # decimal 2 00:05:26.682 23:39:14 env -- scripts/common.sh@353 -- # local d=2 00:05:26.682 23:39:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.682 23:39:14 env -- scripts/common.sh@355 -- # echo 2 00:05:26.682 23:39:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.682 23:39:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.682 23:39:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.682 23:39:14 env -- scripts/common.sh@368 -- # return 0 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.682 --rc genhtml_branch_coverage=1 00:05:26.682 --rc genhtml_function_coverage=1 00:05:26.682 --rc genhtml_legend=1 00:05:26.682 --rc geninfo_all_blocks=1 00:05:26.682 --rc geninfo_unexecuted_blocks=1 00:05:26.682 00:05:26.682 ' 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.682 --rc genhtml_branch_coverage=1 00:05:26.682 --rc genhtml_function_coverage=1 00:05:26.682 --rc genhtml_legend=1 00:05:26.682 --rc geninfo_all_blocks=1 00:05:26.682 --rc geninfo_unexecuted_blocks=1 00:05:26.682 00:05:26.682 ' 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.682 --rc genhtml_branch_coverage=1 00:05:26.682 --rc genhtml_function_coverage=1 00:05:26.682 --rc genhtml_legend=1 00:05:26.682 --rc geninfo_all_blocks=1 00:05:26.682 --rc geninfo_unexecuted_blocks=1 00:05:26.682 00:05:26.682 ' 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.682 --rc genhtml_branch_coverage=1 00:05:26.682 --rc genhtml_function_coverage=1 00:05:26.682 --rc genhtml_legend=1 00:05:26.682 --rc geninfo_all_blocks=1 00:05:26.682 --rc geninfo_unexecuted_blocks=1 00:05:26.682 00:05:26.682 ' 00:05:26.682 23:39:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.682 23:39:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.682 23:39:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.682 ************************************ 00:05:26.682 START TEST env_memory 00:05:26.682 ************************************ 00:05:26.682 23:39:14 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:26.682 00:05:26.682 00:05:26.682 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.682 http://cunit.sourceforge.net/ 00:05:26.682 00:05:26.682 00:05:26.682 Suite: memory 00:05:26.682 Test: alloc and free memory map ...[2024-11-26 23:39:14.797773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:26.941 passed 00:05:26.941 Test: mem map translation ...[2024-11-26 23:39:14.836695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:26.941 [2024-11-26 23:39:14.836827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:26.941 [2024-11-26 23:39:14.837248] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:26.941 [2024-11-26 23:39:14.837359] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:26.941 passed 00:05:26.941 Test: mem map registration ...[2024-11-26 23:39:14.905524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:26.941 [2024-11-26 23:39:14.905629] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:26.941 passed 00:05:26.941 Test: mem map adjacent registrations ...passed 00:05:26.941 00:05:26.941 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.941 suites 1 1 n/a 0 0 00:05:26.941 tests 4 4 4 0 0 00:05:26.941 asserts 152 152 152 0 n/a 00:05:26.941 00:05:26.941 Elapsed time = 0.233 seconds 00:05:26.941 00:05:26.941 real 0m0.261s 00:05:26.941 ************************************ 00:05:26.941 END TEST env_memory 00:05:26.941 ************************************ 00:05:26.941 user 0m0.235s 00:05:26.941 sys 0m0.017s 00:05:26.941 23:39:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.941 23:39:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:26.941 23:39:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:26.941 23:39:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.941 23:39:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.941 23:39:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.941 ************************************ 00:05:26.941 START TEST env_vtophys 00:05:26.941 ************************************ 00:05:26.941 23:39:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:26.941 EAL: lib.eal log level changed from notice to debug 00:05:26.941 EAL: Detected lcore 0 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 1 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 2 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 3 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 4 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 5 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 6 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 7 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 8 as core 0 on socket 0 00:05:26.941 EAL: Detected lcore 9 as core 0 on socket 0 00:05:27.199 EAL: Maximum logical cores by configuration: 128 00:05:27.199 EAL: Detected CPU lcores: 10 00:05:27.199 EAL: Detected NUMA nodes: 1 00:05:27.199 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:27.200 EAL: Detected shared linkage of DPDK 00:05:27.200 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:27.200 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:27.200 EAL: Registered [vdev] bus. 00:05:27.200 EAL: bus.vdev log level changed from disabled to notice 00:05:27.200 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:27.200 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:27.200 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:27.200 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:27.200 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:27.200 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:27.200 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:27.200 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:27.200 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.200 EAL: No shared files mode enabled, IPC is disabled 00:05:27.200 EAL: Selected IOVA mode 'PA' 00:05:27.200 EAL: Probing VFIO support... 00:05:27.200 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.200 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:27.200 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.200 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.200 EAL: Setting up physically contiguous memory... 00:05:27.200 EAL: Setting maximum number of open files to 524288 00:05:27.200 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.200 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.200 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.200 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.200 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.200 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.200 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.200 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.200 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.200 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.200 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.200 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.200 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.200 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:27.200 EAL: Hugepages will be freed exactly as allocated. 00:05:27.200 EAL: No shared files mode enabled, IPC is disabled 00:05:27.200 EAL: No shared files mode enabled, IPC is disabled 00:05:27.200 EAL: TSC frequency is ~2600000 KHz 00:05:27.200 EAL: Main lcore 0 is ready (tid=7f98d5fbaa40;cpuset=[0]) 00:05:27.200 EAL: Trying to obtain current memory policy. 00:05:27.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.200 EAL: Restoring previous memory policy: 0 00:05:27.200 EAL: request: mp_malloc_sync 00:05:27.200 EAL: No shared files mode enabled, IPC is disabled 00:05:27.200 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.200 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.200 EAL: No shared files mode enabled, IPC is disabled 00:05:27.200 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.200 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.200 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:27.200 00:05:27.200 00:05:27.200 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.200 http://cunit.sourceforge.net/ 00:05:27.200 00:05:27.200 00:05:27.200 Suite: components_suite 00:05:27.459 Test: vtophys_malloc_test ...passed 00:05:27.459 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.459 EAL: Restoring previous memory policy: 4 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.459 EAL: Trying to obtain current memory policy. 00:05:27.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.459 EAL: Restoring previous memory policy: 4 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.459 EAL: Trying to obtain current memory policy. 00:05:27.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.459 EAL: Restoring previous memory policy: 4 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.459 EAL: Trying to obtain current memory policy. 
00:05:27.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.459 EAL: Restoring previous memory policy: 4 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.459 EAL: Trying to obtain current memory policy. 00:05:27.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.459 EAL: Restoring previous memory policy: 4 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.459 EAL: Trying to obtain current memory policy. 00:05:27.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.459 EAL: Restoring previous memory policy: 4 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.459 EAL: request: mp_malloc_sync 00:05:27.459 EAL: No shared files mode enabled, IPC is disabled 00:05:27.459 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.459 EAL: Trying to obtain current memory policy. 00:05:27.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.718 EAL: Restoring previous memory policy: 4 00:05:27.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.718 EAL: request: mp_malloc_sync 00:05:27.718 EAL: No shared files mode enabled, IPC is disabled 00:05:27.718 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.718 EAL: request: mp_malloc_sync 00:05:27.718 EAL: No shared files mode enabled, IPC is disabled 00:05:27.718 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.718 EAL: Trying to obtain current memory policy. 00:05:27.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.718 EAL: Restoring previous memory policy: 4 00:05:27.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.718 EAL: request: mp_malloc_sync 00:05:27.718 EAL: No shared files mode enabled, IPC is disabled 00:05:27.718 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.718 EAL: request: mp_malloc_sync 00:05:27.718 EAL: No shared files mode enabled, IPC is disabled 00:05:27.718 EAL: Heap on socket 0 was shrunk by 258MB 00:05:27.718 EAL: Trying to obtain current memory policy. 
00:05:27.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.718 EAL: Restoring previous memory policy: 4 00:05:27.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.718 EAL: request: mp_malloc_sync 00:05:27.718 EAL: No shared files mode enabled, IPC is disabled 00:05:27.718 EAL: Heap on socket 0 was expanded by 514MB 00:05:27.976 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.976 EAL: request: mp_malloc_sync 00:05:27.976 EAL: No shared files mode enabled, IPC is disabled 00:05:27.976 EAL: Heap on socket 0 was shrunk by 514MB 00:05:27.976 EAL: Trying to obtain current memory policy. 00:05:27.976 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.233 EAL: Restoring previous memory policy: 4 00:05:28.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.233 EAL: request: mp_malloc_sync 00:05:28.233 EAL: No shared files mode enabled, IPC is disabled 00:05:28.233 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.490 passed 00:05:28.490 00:05:28.490 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.490 suites 1 1 n/a 0 0 00:05:28.490 tests 2 2 2 0 0 00:05:28.490 asserts 5218 5218 5218 0 n/a 00:05:28.490 00:05:28.490 Elapsed time = 1.209 seconds 00:05:28.490 EAL: request: mp_malloc_sync 00:05:28.490 EAL: No shared files mode enabled, IPC is disabled 00:05:28.490 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:28.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.491 EAL: request: mp_malloc_sync 00:05:28.491 EAL: No shared files mode enabled, IPC is disabled 00:05:28.491 EAL: Heap on socket 0 was shrunk by 2MB 00:05:28.491 EAL: No shared files mode enabled, IPC is disabled 00:05:28.491 EAL: No shared files mode enabled, IPC is disabled 00:05:28.491 EAL: No shared files mode enabled, IPC is disabled 00:05:28.491 00:05:28.491 real 0m1.432s 00:05:28.491 user 0m0.606s 00:05:28.491 sys 0m0.686s 00:05:28.491 23:39:16 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.491 ************************************ 00:05:28.491 END TEST env_vtophys 00:05:28.491 ************************************ 00:05:28.491 23:39:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:28.491 23:39:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.491 23:39:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.491 23:39:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.491 23:39:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.491 ************************************ 00:05:28.491 START TEST env_pci 00:05:28.491 ************************************ 00:05:28.491 23:39:16 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:28.491 00:05:28.491 00:05:28.491 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.491 http://cunit.sourceforge.net/ 00:05:28.491 00:05:28.491 00:05:28.491 Suite: pci 00:05:28.491 Test: pci_hook ...[2024-11-26 23:39:16.539063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69018 has claimed it 00:05:28.491 passed 00:05:28.491 00:05:28.491 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.491 suites 1 1 n/a 0 0 00:05:28.491 tests 1 1 1 0 0 00:05:28.491 asserts 25 25 25 0 n/a 00:05:28.491 00:05:28.491 Elapsed time = 0.005 seconds 00:05:28.491 EAL: Cannot find 
device (10000:00:01.0) 00:05:28.491 EAL: Failed to attach device on primary process 00:05:28.491 00:05:28.491 real 0m0.053s 00:05:28.491 user 0m0.021s 00:05:28.491 sys 0m0.031s 00:05:28.491 23:39:16 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.491 23:39:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:28.491 ************************************ 00:05:28.491 END TEST env_pci 00:05:28.491 ************************************ 00:05:28.491 23:39:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.491 23:39:16 env -- env/env.sh@15 -- # uname 00:05:28.491 23:39:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.491 23:39:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:28.491 23:39:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.491 23:39:16 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:28.491 23:39:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.491 23:39:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.491 ************************************ 00:05:28.491 START TEST env_dpdk_post_init 00:05:28.491 ************************************ 00:05:28.491 23:39:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.748 EAL: Detected CPU lcores: 10 00:05:28.748 EAL: Detected NUMA nodes: 1 00:05:28.748 EAL: Detected shared linkage of DPDK 00:05:28.748 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.748 EAL: Selected IOVA mode 'PA' 00:05:28.748 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.748 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:28.748 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:28.748 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:28.748 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:28.748 Starting DPDK initialization... 00:05:28.748 Starting SPDK post initialization... 00:05:28.748 SPDK NVMe probe 00:05:28.748 Attaching to 0000:00:10.0 00:05:28.748 Attaching to 0000:00:11.0 00:05:28.748 Attaching to 0000:00:12.0 00:05:28.748 Attaching to 0000:00:13.0 00:05:28.748 Attached to 0000:00:10.0 00:05:28.748 Attached to 0000:00:11.0 00:05:28.748 Attached to 0000:00:13.0 00:05:28.748 Attached to 0000:00:12.0 00:05:28.748 Cleaning up... 
00:05:28.748 00:05:28.748 real 0m0.216s 00:05:28.748 user 0m0.063s 00:05:28.748 sys 0m0.056s 00:05:28.748 23:39:16 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.748 23:39:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.748 ************************************ 00:05:28.748 END TEST env_dpdk_post_init 00:05:28.748 ************************************ 00:05:28.748 23:39:16 env -- env/env.sh@26 -- # uname 00:05:28.748 23:39:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.748 23:39:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.749 23:39:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.749 23:39:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.749 23:39:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.749 ************************************ 00:05:28.749 START TEST env_mem_callbacks 00:05:28.749 ************************************ 00:05:28.749 23:39:16 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.006 EAL: Detected CPU lcores: 10 00:05:29.006 EAL: Detected NUMA nodes: 1 00:05:29.006 EAL: Detected shared linkage of DPDK 00:05:29.006 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.006 EAL: Selected IOVA mode 'PA' 00:05:29.006 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.006 00:05:29.006 00:05:29.006 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.006 http://cunit.sourceforge.net/ 00:05:29.006 00:05:29.006 00:05:29.006 Suite: memory 00:05:29.006 Test: test ... 00:05:29.006 register 0x200000200000 2097152 00:05:29.006 malloc 3145728 00:05:29.006 register 0x200000400000 4194304 00:05:29.006 buf 0x200000500000 len 3145728 PASSED 00:05:29.006 malloc 64 00:05:29.006 buf 0x2000004fff40 len 64 PASSED 00:05:29.006 malloc 4194304 00:05:29.006 register 0x200000800000 6291456 00:05:29.006 buf 0x200000a00000 len 4194304 PASSED 00:05:29.006 free 0x200000500000 3145728 00:05:29.006 free 0x2000004fff40 64 00:05:29.006 unregister 0x200000400000 4194304 PASSED 00:05:29.006 free 0x200000a00000 4194304 00:05:29.006 unregister 0x200000800000 6291456 PASSED 00:05:29.006 malloc 8388608 00:05:29.006 register 0x200000400000 10485760 00:05:29.006 buf 0x200000600000 len 8388608 PASSED 00:05:29.006 free 0x200000600000 8388608 00:05:29.006 unregister 0x200000400000 10485760 PASSED 00:05:29.006 passed 00:05:29.006 00:05:29.006 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.006 suites 1 1 n/a 0 0 00:05:29.006 tests 1 1 1 0 0 00:05:29.006 asserts 15 15 15 0 n/a 00:05:29.006 00:05:29.006 Elapsed time = 0.011 seconds 00:05:29.006 00:05:29.007 real 0m0.149s 00:05:29.007 user 0m0.019s 00:05:29.007 sys 0m0.029s 00:05:29.007 23:39:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.007 23:39:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.007 ************************************ 00:05:29.007 END TEST env_mem_callbacks 00:05:29.007 ************************************ 00:05:29.007 00:05:29.007 real 0m2.460s 00:05:29.007 user 0m1.098s 00:05:29.007 sys 0m1.018s 00:05:29.007 23:39:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.007 23:39:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.007 ************************************ 00:05:29.007 END TEST env 00:05:29.007 
************************************ 00:05:29.007 23:39:17 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.007 23:39:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.007 23:39:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.007 23:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.007 ************************************ 00:05:29.007 START TEST rpc 00:05:29.007 ************************************ 00:05:29.007 23:39:17 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.265 * Looking for test storage... 00:05:29.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:29.265 23:39:17 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.265 23:39:17 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.265 23:39:17 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.265 23:39:17 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.265 23:39:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.265 23:39:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.265 23:39:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.265 23:39:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.265 23:39:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.265 23:39:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.265 23:39:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.265 23:39:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.265 23:39:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.265 23:39:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.265 23:39:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.265 23:39:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.265 23:39:17 rpc -- scripts/common.sh@345 -- # : 1 00:05:29.265 23:39:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.265 23:39:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.265 23:39:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.266 23:39:17 rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.266 23:39:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.266 23:39:17 rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.266 23:39:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.266 23:39:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.266 23:39:17 rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.266 23:39:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.266 23:39:17 rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.266 23:39:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.266 23:39:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.266 23:39:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.266 23:39:17 rpc -- scripts/common.sh@368 -- # return 0 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.266 --rc genhtml_branch_coverage=1 00:05:29.266 --rc genhtml_function_coverage=1 00:05:29.266 --rc genhtml_legend=1 00:05:29.266 --rc geninfo_all_blocks=1 00:05:29.266 --rc geninfo_unexecuted_blocks=1 00:05:29.266 00:05:29.266 ' 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.266 --rc genhtml_branch_coverage=1 00:05:29.266 --rc genhtml_function_coverage=1 00:05:29.266 --rc genhtml_legend=1 00:05:29.266 --rc geninfo_all_blocks=1 00:05:29.266 --rc geninfo_unexecuted_blocks=1 00:05:29.266 00:05:29.266 ' 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.266 --rc genhtml_branch_coverage=1 00:05:29.266 --rc genhtml_function_coverage=1 00:05:29.266 --rc genhtml_legend=1 00:05:29.266 --rc geninfo_all_blocks=1 00:05:29.266 --rc geninfo_unexecuted_blocks=1 00:05:29.266 00:05:29.266 ' 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.266 --rc genhtml_branch_coverage=1 00:05:29.266 --rc genhtml_function_coverage=1 00:05:29.266 --rc genhtml_legend=1 00:05:29.266 --rc geninfo_all_blocks=1 00:05:29.266 --rc geninfo_unexecuted_blocks=1 00:05:29.266 00:05:29.266 ' 00:05:29.266 23:39:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69145 00:05:29.266 23:39:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.266 23:39:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69145 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@835 -- # '[' -z 69145 ']' 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:29.266 23:39:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.266 23:39:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.266 [2024-11-26 23:39:17.307615] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:05:29.266 [2024-11-26 23:39:17.307743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69145 ] 00:05:29.524 [2024-11-26 23:39:17.454709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.524 [2024-11-26 23:39:17.478493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.524 [2024-11-26 23:39:17.478543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69145' to capture a snapshot of events at runtime. 00:05:29.524 [2024-11-26 23:39:17.478555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.524 [2024-11-26 23:39:17.478567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.524 [2024-11-26 23:39:17.478578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69145 for offline analysis/debug. 00:05:29.524 [2024-11-26 23:39:17.478929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.091 23:39:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.091 23:39:18 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.091 23:39:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.091 23:39:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.091 23:39:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.091 23:39:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.091 23:39:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.091 23:39:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.091 23:39:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.091 ************************************ 00:05:30.091 START TEST rpc_integrity 00:05:30.091 ************************************ 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.091 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.091 { 00:05:30.091 "name": "Malloc0", 00:05:30.091 "aliases": [ 00:05:30.091 "2824a44e-20e8-4a5d-b2d8-0f337fba72fb" 00:05:30.091 ], 00:05:30.091 "product_name": "Malloc disk", 00:05:30.091 "block_size": 512, 00:05:30.091 "num_blocks": 16384, 00:05:30.091 "uuid": "2824a44e-20e8-4a5d-b2d8-0f337fba72fb", 00:05:30.091 "assigned_rate_limits": { 00:05:30.091 "rw_ios_per_sec": 0, 00:05:30.091 "rw_mbytes_per_sec": 0, 00:05:30.091 "r_mbytes_per_sec": 0, 00:05:30.091 "w_mbytes_per_sec": 0 00:05:30.091 }, 00:05:30.091 "claimed": false, 00:05:30.091 "zoned": false, 00:05:30.091 "supported_io_types": { 00:05:30.091 "read": true, 00:05:30.091 "write": true, 00:05:30.091 "unmap": true, 00:05:30.091 "flush": true, 00:05:30.091 "reset": true, 00:05:30.091 "nvme_admin": false, 00:05:30.091 "nvme_io": false, 00:05:30.091 "nvme_io_md": false, 00:05:30.091 "write_zeroes": true, 00:05:30.091 "zcopy": true, 00:05:30.091 "get_zone_info": false, 00:05:30.091 "zone_management": false, 00:05:30.091 "zone_append": false, 00:05:30.091 "compare": false, 00:05:30.091 "compare_and_write": false, 00:05:30.091 "abort": true, 00:05:30.091 "seek_hole": false, 00:05:30.091 "seek_data": false, 00:05:30.091 "copy": true, 00:05:30.091 "nvme_iov_md": false 00:05:30.091 }, 00:05:30.091 "memory_domains": [ 00:05:30.091 { 00:05:30.091 "dma_device_id": "system", 00:05:30.091 "dma_device_type": 1 00:05:30.091 }, 00:05:30.091 { 00:05:30.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.091 "dma_device_type": 2 00:05:30.091 } 00:05:30.091 ], 00:05:30.091 "driver_specific": {} 00:05:30.091 } 00:05:30.091 ]' 00:05:30.091 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.350 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.350 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.350 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.350 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.350 [2024-11-26 23:39:18.237635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.350 [2024-11-26 23:39:18.237695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.350 [2024-11-26 23:39:18.237728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:30.350 [2024-11-26 23:39:18.237739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.350 [2024-11-26 23:39:18.240089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.350 [2024-11-26 23:39:18.240125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.350 
Passthru0 00:05:30.350 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.351 { 00:05:30.351 "name": "Malloc0", 00:05:30.351 "aliases": [ 00:05:30.351 "2824a44e-20e8-4a5d-b2d8-0f337fba72fb" 00:05:30.351 ], 00:05:30.351 "product_name": "Malloc disk", 00:05:30.351 "block_size": 512, 00:05:30.351 "num_blocks": 16384, 00:05:30.351 "uuid": "2824a44e-20e8-4a5d-b2d8-0f337fba72fb", 00:05:30.351 "assigned_rate_limits": { 00:05:30.351 "rw_ios_per_sec": 0, 00:05:30.351 "rw_mbytes_per_sec": 0, 00:05:30.351 "r_mbytes_per_sec": 0, 00:05:30.351 "w_mbytes_per_sec": 0 00:05:30.351 }, 00:05:30.351 "claimed": true, 00:05:30.351 "claim_type": "exclusive_write", 00:05:30.351 "zoned": false, 00:05:30.351 "supported_io_types": { 00:05:30.351 "read": true, 00:05:30.351 "write": true, 00:05:30.351 "unmap": true, 00:05:30.351 "flush": true, 00:05:30.351 "reset": true, 00:05:30.351 "nvme_admin": false, 00:05:30.351 "nvme_io": false, 00:05:30.351 "nvme_io_md": false, 00:05:30.351 "write_zeroes": true, 00:05:30.351 "zcopy": true, 00:05:30.351 "get_zone_info": false, 00:05:30.351 "zone_management": false, 00:05:30.351 "zone_append": false, 00:05:30.351 "compare": false, 00:05:30.351 "compare_and_write": false, 00:05:30.351 "abort": true, 00:05:30.351 "seek_hole": false, 00:05:30.351 "seek_data": false, 00:05:30.351 "copy": true, 00:05:30.351 "nvme_iov_md": false 00:05:30.351 }, 00:05:30.351 "memory_domains": [ 00:05:30.351 { 00:05:30.351 "dma_device_id": "system", 00:05:30.351 "dma_device_type": 1 00:05:30.351 }, 00:05:30.351 { 00:05:30.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.351 "dma_device_type": 2 00:05:30.351 } 00:05:30.351 ], 00:05:30.351 "driver_specific": {} 00:05:30.351 }, 00:05:30.351 { 00:05:30.351 "name": "Passthru0", 00:05:30.351 "aliases": [ 00:05:30.351 "3a06d504-9ae0-5f60-a537-c52580856d96" 00:05:30.351 ], 00:05:30.351 "product_name": "passthru", 00:05:30.351 "block_size": 512, 00:05:30.351 "num_blocks": 16384, 00:05:30.351 "uuid": "3a06d504-9ae0-5f60-a537-c52580856d96", 00:05:30.351 "assigned_rate_limits": { 00:05:30.351 "rw_ios_per_sec": 0, 00:05:30.351 "rw_mbytes_per_sec": 0, 00:05:30.351 "r_mbytes_per_sec": 0, 00:05:30.351 "w_mbytes_per_sec": 0 00:05:30.351 }, 00:05:30.351 "claimed": false, 00:05:30.351 "zoned": false, 00:05:30.351 "supported_io_types": { 00:05:30.351 "read": true, 00:05:30.351 "write": true, 00:05:30.351 "unmap": true, 00:05:30.351 "flush": true, 00:05:30.351 "reset": true, 00:05:30.351 "nvme_admin": false, 00:05:30.351 "nvme_io": false, 00:05:30.351 "nvme_io_md": false, 00:05:30.351 "write_zeroes": true, 00:05:30.351 "zcopy": true, 00:05:30.351 "get_zone_info": false, 00:05:30.351 "zone_management": false, 00:05:30.351 "zone_append": false, 00:05:30.351 "compare": false, 00:05:30.351 "compare_and_write": false, 00:05:30.351 "abort": true, 00:05:30.351 "seek_hole": false, 00:05:30.351 "seek_data": false, 00:05:30.351 "copy": true, 00:05:30.351 "nvme_iov_md": false 00:05:30.351 }, 00:05:30.351 "memory_domains": [ 00:05:30.351 { 00:05:30.351 "dma_device_id": "system", 00:05:30.351 "dma_device_type": 1 00:05:30.351 }, 
00:05:30.351 { 00:05:30.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.351 "dma_device_type": 2 00:05:30.351 } 00:05:30.351 ], 00:05:30.351 "driver_specific": { 00:05:30.351 "passthru": { 00:05:30.351 "name": "Passthru0", 00:05:30.351 "base_bdev_name": "Malloc0" 00:05:30.351 } 00:05:30.351 } 00:05:30.351 } 00:05:30.351 ]' 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.351 23:39:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.351 00:05:30.351 real 0m0.209s 00:05:30.351 user 0m0.126s 00:05:30.351 sys 0m0.029s 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.351 ************************************ 00:05:30.351 END TEST rpc_integrity 00:05:30.351 ************************************ 00:05:30.351 23:39:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 23:39:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.351 23:39:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.351 23:39:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.351 23:39:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 ************************************ 00:05:30.351 START TEST rpc_plugins 00:05:30.351 ************************************ 00:05:30.351 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:30.351 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.351 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.351 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.351 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.351 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.351 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.351 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.351 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.351 23:39:18 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.351 { 00:05:30.351 "name": "Malloc1", 00:05:30.351 "aliases": [ 00:05:30.351 "1ff38594-7b40-4fb5-915b-1f8403fb49c7" 00:05:30.352 ], 00:05:30.352 "product_name": "Malloc disk", 00:05:30.352 "block_size": 4096, 00:05:30.352 "num_blocks": 256, 00:05:30.352 "uuid": "1ff38594-7b40-4fb5-915b-1f8403fb49c7", 00:05:30.352 "assigned_rate_limits": { 00:05:30.352 "rw_ios_per_sec": 0, 00:05:30.352 "rw_mbytes_per_sec": 0, 00:05:30.352 "r_mbytes_per_sec": 0, 00:05:30.352 "w_mbytes_per_sec": 0 00:05:30.352 }, 00:05:30.352 "claimed": false, 00:05:30.352 "zoned": false, 00:05:30.352 "supported_io_types": { 00:05:30.352 "read": true, 00:05:30.352 "write": true, 00:05:30.352 "unmap": true, 00:05:30.352 "flush": true, 00:05:30.352 "reset": true, 00:05:30.352 "nvme_admin": false, 00:05:30.352 "nvme_io": false, 00:05:30.352 "nvme_io_md": false, 00:05:30.352 "write_zeroes": true, 00:05:30.352 "zcopy": true, 00:05:30.352 "get_zone_info": false, 00:05:30.352 "zone_management": false, 00:05:30.352 "zone_append": false, 00:05:30.352 "compare": false, 00:05:30.352 "compare_and_write": false, 00:05:30.352 "abort": true, 00:05:30.352 "seek_hole": false, 00:05:30.352 "seek_data": false, 00:05:30.352 "copy": true, 00:05:30.352 "nvme_iov_md": false 00:05:30.352 }, 00:05:30.352 "memory_domains": [ 00:05:30.352 { 00:05:30.352 "dma_device_id": "system", 00:05:30.352 "dma_device_type": 1 00:05:30.352 }, 00:05:30.352 { 00:05:30.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.352 "dma_device_type": 2 00:05:30.352 } 00:05:30.352 ], 00:05:30.352 "driver_specific": {} 00:05:30.352 } 00:05:30.352 ]' 00:05:30.352 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:30.352 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.352 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.352 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.352 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.352 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.352 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.352 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.352 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.352 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.352 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.352 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:30.611 23:39:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:30.611 00:05:30.611 real 0m0.113s 00:05:30.611 user 0m0.064s 00:05:30.611 sys 0m0.014s 00:05:30.611 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.611 ************************************ 00:05:30.611 END TEST rpc_plugins 00:05:30.611 ************************************ 00:05:30.611 23:39:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.611 23:39:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.611 23:39:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.611 23:39:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.611 23:39:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.611 ************************************ 00:05:30.611 START TEST rpc_trace_cmd_test 
00:05:30.611 ************************************ 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:30.611 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69145", 00:05:30.611 "tpoint_group_mask": "0x8", 00:05:30.611 "iscsi_conn": { 00:05:30.611 "mask": "0x2", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "scsi": { 00:05:30.611 "mask": "0x4", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "bdev": { 00:05:30.611 "mask": "0x8", 00:05:30.611 "tpoint_mask": "0xffffffffffffffff" 00:05:30.611 }, 00:05:30.611 "nvmf_rdma": { 00:05:30.611 "mask": "0x10", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "nvmf_tcp": { 00:05:30.611 "mask": "0x20", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "ftl": { 00:05:30.611 "mask": "0x40", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "blobfs": { 00:05:30.611 "mask": "0x80", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "dsa": { 00:05:30.611 "mask": "0x200", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "thread": { 00:05:30.611 "mask": "0x400", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "nvme_pcie": { 00:05:30.611 "mask": "0x800", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "iaa": { 00:05:30.611 "mask": "0x1000", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "nvme_tcp": { 00:05:30.611 "mask": "0x2000", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "bdev_nvme": { 00:05:30.611 "mask": "0x4000", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "sock": { 00:05:30.611 "mask": "0x8000", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "blob": { 00:05:30.611 "mask": "0x10000", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "bdev_raid": { 00:05:30.611 "mask": "0x20000", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 }, 00:05:30.611 "scheduler": { 00:05:30.611 "mask": "0x40000", 00:05:30.611 "tpoint_mask": "0x0" 00:05:30.611 } 00:05:30.611 }' 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.611 00:05:30.611 real 0m0.164s 00:05:30.611 
user 0m0.135s 00:05:30.611 sys 0m0.021s 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.611 23:39:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.611 ************************************ 00:05:30.611 END TEST rpc_trace_cmd_test 00:05:30.611 ************************************ 00:05:30.611 23:39:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.611 23:39:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.611 23:39:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.611 23:39:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.611 23:39:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.611 23:39:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.611 ************************************ 00:05:30.611 START TEST rpc_daemon_integrity 00:05:30.611 ************************************ 00:05:30.611 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.611 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.611 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.611 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.888 { 00:05:30.888 "name": "Malloc2", 00:05:30.888 "aliases": [ 00:05:30.888 "ccbdc7d0-d64f-468d-80a9-150f8b699ace" 00:05:30.888 ], 00:05:30.888 "product_name": "Malloc disk", 00:05:30.888 "block_size": 512, 00:05:30.888 "num_blocks": 16384, 00:05:30.888 "uuid": "ccbdc7d0-d64f-468d-80a9-150f8b699ace", 00:05:30.888 "assigned_rate_limits": { 00:05:30.888 "rw_ios_per_sec": 0, 00:05:30.888 "rw_mbytes_per_sec": 0, 00:05:30.888 "r_mbytes_per_sec": 0, 00:05:30.888 "w_mbytes_per_sec": 0 00:05:30.888 }, 00:05:30.888 "claimed": false, 00:05:30.888 "zoned": false, 00:05:30.888 "supported_io_types": { 00:05:30.888 "read": true, 00:05:30.888 "write": true, 00:05:30.888 "unmap": true, 00:05:30.888 "flush": true, 00:05:30.888 "reset": true, 00:05:30.888 "nvme_admin": false, 00:05:30.888 "nvme_io": false, 00:05:30.888 "nvme_io_md": false, 00:05:30.888 "write_zeroes": true, 00:05:30.888 "zcopy": true, 00:05:30.888 "get_zone_info": 
false, 00:05:30.888 "zone_management": false, 00:05:30.888 "zone_append": false, 00:05:30.888 "compare": false, 00:05:30.888 "compare_and_write": false, 00:05:30.888 "abort": true, 00:05:30.888 "seek_hole": false, 00:05:30.888 "seek_data": false, 00:05:30.888 "copy": true, 00:05:30.888 "nvme_iov_md": false 00:05:30.888 }, 00:05:30.888 "memory_domains": [ 00:05:30.888 { 00:05:30.888 "dma_device_id": "system", 00:05:30.888 "dma_device_type": 1 00:05:30.888 }, 00:05:30.888 { 00:05:30.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.888 "dma_device_type": 2 00:05:30.888 } 00:05:30.888 ], 00:05:30.888 "driver_specific": {} 00:05:30.888 } 00:05:30.888 ]' 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.888 [2024-11-26 23:39:18.837856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.888 [2024-11-26 23:39:18.837912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.888 [2024-11-26 23:39:18.837935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:30.888 [2024-11-26 23:39:18.837945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.888 [2024-11-26 23:39:18.839806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.888 [2024-11-26 23:39:18.839835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.888 Passthru0 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.888 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.888 { 00:05:30.888 "name": "Malloc2", 00:05:30.888 "aliases": [ 00:05:30.888 "ccbdc7d0-d64f-468d-80a9-150f8b699ace" 00:05:30.888 ], 00:05:30.888 "product_name": "Malloc disk", 00:05:30.888 "block_size": 512, 00:05:30.888 "num_blocks": 16384, 00:05:30.888 "uuid": "ccbdc7d0-d64f-468d-80a9-150f8b699ace", 00:05:30.888 "assigned_rate_limits": { 00:05:30.888 "rw_ios_per_sec": 0, 00:05:30.888 "rw_mbytes_per_sec": 0, 00:05:30.888 "r_mbytes_per_sec": 0, 00:05:30.888 "w_mbytes_per_sec": 0 00:05:30.888 }, 00:05:30.888 "claimed": true, 00:05:30.888 "claim_type": "exclusive_write", 00:05:30.888 "zoned": false, 00:05:30.888 "supported_io_types": { 00:05:30.888 "read": true, 00:05:30.888 "write": true, 00:05:30.888 "unmap": true, 00:05:30.888 "flush": true, 00:05:30.888 "reset": true, 00:05:30.888 "nvme_admin": false, 00:05:30.888 "nvme_io": false, 00:05:30.888 "nvme_io_md": false, 00:05:30.888 "write_zeroes": true, 00:05:30.888 "zcopy": true, 00:05:30.888 "get_zone_info": false, 00:05:30.888 "zone_management": false, 00:05:30.888 "zone_append": false, 00:05:30.888 "compare": false, 
00:05:30.888 "compare_and_write": false, 00:05:30.888 "abort": true, 00:05:30.888 "seek_hole": false, 00:05:30.888 "seek_data": false, 00:05:30.888 "copy": true, 00:05:30.888 "nvme_iov_md": false 00:05:30.888 }, 00:05:30.888 "memory_domains": [ 00:05:30.888 { 00:05:30.888 "dma_device_id": "system", 00:05:30.888 "dma_device_type": 1 00:05:30.888 }, 00:05:30.888 { 00:05:30.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.888 "dma_device_type": 2 00:05:30.888 } 00:05:30.888 ], 00:05:30.888 "driver_specific": {} 00:05:30.888 }, 00:05:30.888 { 00:05:30.888 "name": "Passthru0", 00:05:30.888 "aliases": [ 00:05:30.888 "2cdeda2e-ac6d-5904-8d87-6ee846a97bc6" 00:05:30.888 ], 00:05:30.888 "product_name": "passthru", 00:05:30.888 "block_size": 512, 00:05:30.888 "num_blocks": 16384, 00:05:30.888 "uuid": "2cdeda2e-ac6d-5904-8d87-6ee846a97bc6", 00:05:30.888 "assigned_rate_limits": { 00:05:30.888 "rw_ios_per_sec": 0, 00:05:30.888 "rw_mbytes_per_sec": 0, 00:05:30.888 "r_mbytes_per_sec": 0, 00:05:30.888 "w_mbytes_per_sec": 0 00:05:30.888 }, 00:05:30.888 "claimed": false, 00:05:30.888 "zoned": false, 00:05:30.888 "supported_io_types": { 00:05:30.888 "read": true, 00:05:30.888 "write": true, 00:05:30.888 "unmap": true, 00:05:30.888 "flush": true, 00:05:30.888 "reset": true, 00:05:30.888 "nvme_admin": false, 00:05:30.888 "nvme_io": false, 00:05:30.888 "nvme_io_md": false, 00:05:30.888 "write_zeroes": true, 00:05:30.888 "zcopy": true, 00:05:30.888 "get_zone_info": false, 00:05:30.888 "zone_management": false, 00:05:30.888 "zone_append": false, 00:05:30.888 "compare": false, 00:05:30.888 "compare_and_write": false, 00:05:30.888 "abort": true, 00:05:30.888 "seek_hole": false, 00:05:30.888 "seek_data": false, 00:05:30.888 "copy": true, 00:05:30.888 "nvme_iov_md": false 00:05:30.888 }, 00:05:30.888 "memory_domains": [ 00:05:30.888 { 00:05:30.888 "dma_device_id": "system", 00:05:30.888 "dma_device_type": 1 00:05:30.888 }, 00:05:30.888 { 00:05:30.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.888 "dma_device_type": 2 00:05:30.888 } 00:05:30.888 ], 00:05:30.888 "driver_specific": { 00:05:30.889 "passthru": { 00:05:30.889 "name": "Passthru0", 00:05:30.889 "base_bdev_name": "Malloc2" 00:05:30.889 } 00:05:30.889 } 00:05:30.889 } 00:05:30.889 ]' 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.889 00:05:30.889 real 0m0.206s 00:05:30.889 user 0m0.122s 00:05:30.889 sys 0m0.030s 00:05:30.889 ************************************ 00:05:30.889 END TEST rpc_daemon_integrity 00:05:30.889 ************************************ 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.889 23:39:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 23:39:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:30.889 23:39:18 rpc -- rpc/rpc.sh@84 -- # killprocess 69145 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 69145 ']' 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@958 -- # kill -0 69145 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@959 -- # uname 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69145 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.889 killing process with pid 69145 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69145' 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@973 -- # kill 69145 00:05:30.889 23:39:18 rpc -- common/autotest_common.sh@978 -- # wait 69145 00:05:31.169 00:05:31.169 real 0m2.198s 00:05:31.169 user 0m2.579s 00:05:31.169 sys 0m0.596s 00:05:31.169 23:39:19 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.169 ************************************ 00:05:31.169 END TEST rpc 00:05:31.169 ************************************ 00:05:31.169 23:39:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.431 23:39:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:31.431 23:39:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.431 23:39:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.431 23:39:19 -- common/autotest_common.sh@10 -- # set +x 00:05:31.431 ************************************ 00:05:31.431 START TEST skip_rpc 00:05:31.431 ************************************ 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:31.431 * Looking for test storage... 
00:05:31.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.431 23:39:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.431 --rc genhtml_branch_coverage=1 00:05:31.431 --rc genhtml_function_coverage=1 00:05:31.431 --rc genhtml_legend=1 00:05:31.431 --rc geninfo_all_blocks=1 00:05:31.431 --rc geninfo_unexecuted_blocks=1 00:05:31.431 00:05:31.431 ' 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.431 --rc genhtml_branch_coverage=1 00:05:31.431 --rc genhtml_function_coverage=1 00:05:31.431 --rc genhtml_legend=1 00:05:31.431 --rc geninfo_all_blocks=1 00:05:31.431 --rc geninfo_unexecuted_blocks=1 00:05:31.431 00:05:31.431 ' 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:31.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.431 --rc genhtml_branch_coverage=1 00:05:31.431 --rc genhtml_function_coverage=1 00:05:31.431 --rc genhtml_legend=1 00:05:31.431 --rc geninfo_all_blocks=1 00:05:31.431 --rc geninfo_unexecuted_blocks=1 00:05:31.431 00:05:31.431 ' 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.431 --rc genhtml_branch_coverage=1 00:05:31.431 --rc genhtml_function_coverage=1 00:05:31.431 --rc genhtml_legend=1 00:05:31.431 --rc geninfo_all_blocks=1 00:05:31.431 --rc geninfo_unexecuted_blocks=1 00:05:31.431 00:05:31.431 ' 00:05:31.431 23:39:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:31.431 23:39:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:31.431 23:39:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.431 23:39:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.431 ************************************ 00:05:31.431 START TEST skip_rpc 00:05:31.431 ************************************ 00:05:31.431 23:39:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:31.431 23:39:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69341 00:05:31.431 23:39:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.431 23:39:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.431 23:39:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.431 [2024-11-26 23:39:19.549444] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:05:31.431 [2024-11-26 23:39:19.549580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69341 ] 00:05:31.703 [2024-11-26 23:39:19.693101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.703 [2024-11-26 23:39:19.716220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69341 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69341 ']' 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69341 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69341 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.985 killing process with pid 69341 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69341' 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69341 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69341 00:05:36.985 00:05:36.985 real 0m5.336s 00:05:36.985 user 0m4.973s 00:05:36.985 sys 0m0.266s 00:05:36.985 ************************************ 00:05:36.985 END TEST skip_rpc 00:05:36.985 ************************************ 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.985 23:39:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:36.985 23:39:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.985 23:39:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.985 23:39:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.985 23:39:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.985 ************************************ 00:05:36.985 START TEST skip_rpc_with_json 00:05:36.985 ************************************ 00:05:36.985 23:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69423 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69423 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69423 ']' 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.986 23:39:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.986 [2024-11-26 23:39:24.935163] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:05:36.986 [2024-11-26 23:39:24.935272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69423 ] 00:05:36.986 [2024-11-26 23:39:25.076223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.986 [2024-11-26 23:39:25.107355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.928 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.928 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:37.928 23:39:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.928 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.929 [2024-11-26 23:39:25.782782] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.929 request: 00:05:37.929 { 00:05:37.929 "trtype": "tcp", 00:05:37.929 "method": "nvmf_get_transports", 00:05:37.929 "req_id": 1 00:05:37.929 } 00:05:37.929 Got JSON-RPC error response 00:05:37.929 response: 00:05:37.929 { 00:05:37.929 "code": -19, 00:05:37.929 "message": "No such device" 00:05:37.929 } 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.929 [2024-11-26 23:39:25.794922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.929 23:39:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:37.929 { 00:05:37.929 "subsystems": [ 00:05:37.929 { 00:05:37.929 "subsystem": "fsdev", 00:05:37.929 "config": [ 00:05:37.929 { 00:05:37.929 "method": "fsdev_set_opts", 00:05:37.929 "params": { 00:05:37.929 "fsdev_io_pool_size": 65535, 00:05:37.929 "fsdev_io_cache_size": 256 00:05:37.929 } 00:05:37.929 } 00:05:37.929 ] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "keyring", 00:05:37.929 "config": [] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "iobuf", 00:05:37.929 "config": [ 00:05:37.929 { 00:05:37.929 "method": "iobuf_set_options", 00:05:37.929 "params": { 00:05:37.929 "small_pool_count": 8192, 00:05:37.929 "large_pool_count": 1024, 00:05:37.929 "small_bufsize": 8192, 00:05:37.929 "large_bufsize": 135168, 00:05:37.929 "enable_numa": false 00:05:37.929 } 00:05:37.929 } 00:05:37.929 ] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "sock", 00:05:37.929 "config": [ 00:05:37.929 { 
00:05:37.929 "method": "sock_set_default_impl", 00:05:37.929 "params": { 00:05:37.929 "impl_name": "posix" 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "sock_impl_set_options", 00:05:37.929 "params": { 00:05:37.929 "impl_name": "ssl", 00:05:37.929 "recv_buf_size": 4096, 00:05:37.929 "send_buf_size": 4096, 00:05:37.929 "enable_recv_pipe": true, 00:05:37.929 "enable_quickack": false, 00:05:37.929 "enable_placement_id": 0, 00:05:37.929 "enable_zerocopy_send_server": true, 00:05:37.929 "enable_zerocopy_send_client": false, 00:05:37.929 "zerocopy_threshold": 0, 00:05:37.929 "tls_version": 0, 00:05:37.929 "enable_ktls": false 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "sock_impl_set_options", 00:05:37.929 "params": { 00:05:37.929 "impl_name": "posix", 00:05:37.929 "recv_buf_size": 2097152, 00:05:37.929 "send_buf_size": 2097152, 00:05:37.929 "enable_recv_pipe": true, 00:05:37.929 "enable_quickack": false, 00:05:37.929 "enable_placement_id": 0, 00:05:37.929 "enable_zerocopy_send_server": true, 00:05:37.929 "enable_zerocopy_send_client": false, 00:05:37.929 "zerocopy_threshold": 0, 00:05:37.929 "tls_version": 0, 00:05:37.929 "enable_ktls": false 00:05:37.929 } 00:05:37.929 } 00:05:37.929 ] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "vmd", 00:05:37.929 "config": [] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "accel", 00:05:37.929 "config": [ 00:05:37.929 { 00:05:37.929 "method": "accel_set_options", 00:05:37.929 "params": { 00:05:37.929 "small_cache_size": 128, 00:05:37.929 "large_cache_size": 16, 00:05:37.929 "task_count": 2048, 00:05:37.929 "sequence_count": 2048, 00:05:37.929 "buf_count": 2048 00:05:37.929 } 00:05:37.929 } 00:05:37.929 ] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "bdev", 00:05:37.929 "config": [ 00:05:37.929 { 00:05:37.929 "method": "bdev_set_options", 00:05:37.929 "params": { 00:05:37.929 "bdev_io_pool_size": 65535, 00:05:37.929 "bdev_io_cache_size": 256, 00:05:37.929 "bdev_auto_examine": true, 00:05:37.929 "iobuf_small_cache_size": 128, 00:05:37.929 "iobuf_large_cache_size": 16 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "bdev_raid_set_options", 00:05:37.929 "params": { 00:05:37.929 "process_window_size_kb": 1024, 00:05:37.929 "process_max_bandwidth_mb_sec": 0 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "bdev_iscsi_set_options", 00:05:37.929 "params": { 00:05:37.929 "timeout_sec": 30 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "bdev_nvme_set_options", 00:05:37.929 "params": { 00:05:37.929 "action_on_timeout": "none", 00:05:37.929 "timeout_us": 0, 00:05:37.929 "timeout_admin_us": 0, 00:05:37.929 "keep_alive_timeout_ms": 10000, 00:05:37.929 "arbitration_burst": 0, 00:05:37.929 "low_priority_weight": 0, 00:05:37.929 "medium_priority_weight": 0, 00:05:37.929 "high_priority_weight": 0, 00:05:37.929 "nvme_adminq_poll_period_us": 10000, 00:05:37.929 "nvme_ioq_poll_period_us": 0, 00:05:37.929 "io_queue_requests": 0, 00:05:37.929 "delay_cmd_submit": true, 00:05:37.929 "transport_retry_count": 4, 00:05:37.929 "bdev_retry_count": 3, 00:05:37.929 "transport_ack_timeout": 0, 00:05:37.929 "ctrlr_loss_timeout_sec": 0, 00:05:37.929 "reconnect_delay_sec": 0, 00:05:37.929 "fast_io_fail_timeout_sec": 0, 00:05:37.929 "disable_auto_failback": false, 00:05:37.929 "generate_uuids": false, 00:05:37.929 "transport_tos": 0, 00:05:37.929 "nvme_error_stat": false, 00:05:37.929 "rdma_srq_size": 0, 00:05:37.929 "io_path_stat": false, 
00:05:37.929 "allow_accel_sequence": false, 00:05:37.929 "rdma_max_cq_size": 0, 00:05:37.929 "rdma_cm_event_timeout_ms": 0, 00:05:37.929 "dhchap_digests": [ 00:05:37.929 "sha256", 00:05:37.929 "sha384", 00:05:37.929 "sha512" 00:05:37.929 ], 00:05:37.929 "dhchap_dhgroups": [ 00:05:37.929 "null", 00:05:37.929 "ffdhe2048", 00:05:37.929 "ffdhe3072", 00:05:37.929 "ffdhe4096", 00:05:37.929 "ffdhe6144", 00:05:37.929 "ffdhe8192" 00:05:37.929 ] 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "bdev_nvme_set_hotplug", 00:05:37.929 "params": { 00:05:37.929 "period_us": 100000, 00:05:37.929 "enable": false 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "bdev_wait_for_examine" 00:05:37.929 } 00:05:37.929 ] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "scsi", 00:05:37.929 "config": null 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "scheduler", 00:05:37.929 "config": [ 00:05:37.929 { 00:05:37.929 "method": "framework_set_scheduler", 00:05:37.929 "params": { 00:05:37.929 "name": "static" 00:05:37.929 } 00:05:37.929 } 00:05:37.929 ] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "vhost_scsi", 00:05:37.929 "config": [] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "vhost_blk", 00:05:37.929 "config": [] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "ublk", 00:05:37.929 "config": [] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "nbd", 00:05:37.929 "config": [] 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "subsystem": "nvmf", 00:05:37.929 "config": [ 00:05:37.929 { 00:05:37.929 "method": "nvmf_set_config", 00:05:37.929 "params": { 00:05:37.929 "discovery_filter": "match_any", 00:05:37.929 "admin_cmd_passthru": { 00:05:37.929 "identify_ctrlr": false 00:05:37.929 }, 00:05:37.929 "dhchap_digests": [ 00:05:37.929 "sha256", 00:05:37.929 "sha384", 00:05:37.929 "sha512" 00:05:37.929 ], 00:05:37.929 "dhchap_dhgroups": [ 00:05:37.929 "null", 00:05:37.929 "ffdhe2048", 00:05:37.929 "ffdhe3072", 00:05:37.929 "ffdhe4096", 00:05:37.929 "ffdhe6144", 00:05:37.929 "ffdhe8192" 00:05:37.929 ] 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "nvmf_set_max_subsystems", 00:05:37.929 "params": { 00:05:37.929 "max_subsystems": 1024 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "nvmf_set_crdt", 00:05:37.929 "params": { 00:05:37.929 "crdt1": 0, 00:05:37.929 "crdt2": 0, 00:05:37.929 "crdt3": 0 00:05:37.929 } 00:05:37.929 }, 00:05:37.929 { 00:05:37.929 "method": "nvmf_create_transport", 00:05:37.929 "params": { 00:05:37.929 "trtype": "TCP", 00:05:37.929 "max_queue_depth": 128, 00:05:37.929 "max_io_qpairs_per_ctrlr": 127, 00:05:37.930 "in_capsule_data_size": 4096, 00:05:37.930 "max_io_size": 131072, 00:05:37.930 "io_unit_size": 131072, 00:05:37.930 "max_aq_depth": 128, 00:05:37.930 "num_shared_buffers": 511, 00:05:37.930 "buf_cache_size": 4294967295, 00:05:37.930 "dif_insert_or_strip": false, 00:05:37.930 "zcopy": false, 00:05:37.930 "c2h_success": true, 00:05:37.930 "sock_priority": 0, 00:05:37.930 "abort_timeout_sec": 1, 00:05:37.930 "ack_timeout": 0, 00:05:37.930 "data_wr_pool_size": 0 00:05:37.930 } 00:05:37.930 } 00:05:37.930 ] 00:05:37.930 }, 00:05:37.930 { 00:05:37.930 "subsystem": "iscsi", 00:05:37.930 "config": [ 00:05:37.930 { 00:05:37.930 "method": "iscsi_set_options", 00:05:37.930 "params": { 00:05:37.930 "node_base": "iqn.2016-06.io.spdk", 00:05:37.930 "max_sessions": 128, 00:05:37.930 "max_connections_per_session": 2, 00:05:37.930 "max_queue_depth": 64, 00:05:37.930 
"default_time2wait": 2, 00:05:37.930 "default_time2retain": 20, 00:05:37.930 "first_burst_length": 8192, 00:05:37.930 "immediate_data": true, 00:05:37.930 "allow_duplicated_isid": false, 00:05:37.930 "error_recovery_level": 0, 00:05:37.930 "nop_timeout": 60, 00:05:37.930 "nop_in_interval": 30, 00:05:37.930 "disable_chap": false, 00:05:37.930 "require_chap": false, 00:05:37.930 "mutual_chap": false, 00:05:37.930 "chap_group": 0, 00:05:37.930 "max_large_datain_per_connection": 64, 00:05:37.930 "max_r2t_per_connection": 4, 00:05:37.930 "pdu_pool_size": 36864, 00:05:37.930 "immediate_data_pool_size": 16384, 00:05:37.930 "data_out_pool_size": 2048 00:05:37.930 } 00:05:37.930 } 00:05:37.930 ] 00:05:37.930 } 00:05:37.930 ] 00:05:37.930 } 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69423 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69423 ']' 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69423 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69423 00:05:37.930 killing process with pid 69423 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69423' 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69423 00:05:37.930 23:39:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69423 00:05:38.190 23:39:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69452 00:05:38.190 23:39:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.190 23:39:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69452 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69452 ']' 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69452 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69452 00:05:43.454 killing process with pid 69452 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69452' 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69452 00:05:43.454 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69452 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.715 00:05:43.715 real 0m6.752s 00:05:43.715 user 0m6.296s 00:05:43.715 sys 0m0.689s 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.715 ************************************ 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 END TEST skip_rpc_with_json 00:05:43.715 ************************************ 00:05:43.715 23:39:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:43.715 23:39:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.715 23:39:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.715 23:39:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 ************************************ 00:05:43.715 START TEST skip_rpc_with_delay 00:05:43.715 ************************************ 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.715 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.716 [2024-11-26 23:39:31.762903] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.716 00:05:43.716 real 0m0.126s 00:05:43.716 user 0m0.062s 00:05:43.716 sys 0m0.061s 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.716 23:39:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:43.716 ************************************ 00:05:43.716 END TEST skip_rpc_with_delay 00:05:43.716 ************************************ 00:05:43.977 23:39:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.977 23:39:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.977 23:39:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.977 23:39:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.977 23:39:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.977 23:39:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.977 ************************************ 00:05:43.977 START TEST exit_on_failed_rpc_init 00:05:43.977 ************************************ 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69563 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69563 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69563 ']' 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.977 23:39:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.977 [2024-11-26 23:39:31.963069] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:05:43.977 [2024-11-26 23:39:31.963235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69563 ] 00:05:43.977 [2024-11-26 23:39:32.103999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.238 [2024-11-26 23:39:32.132139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:44.812 23:39:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.812 [2024-11-26 23:39:32.913693] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:05:44.812 [2024-11-26 23:39:32.913875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69581 ] 00:05:45.073 [2024-11-26 23:39:33.061435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.073 [2024-11-26 23:39:33.102230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.073 [2024-11-26 23:39:33.102340] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:45.073 [2024-11-26 23:39:33.102363] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:45.073 [2024-11-26 23:39:33.102374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69563 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69563 ']' 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69563 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69563 00:05:45.335 killing process with pid 69563 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69563' 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69563 00:05:45.335 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69563 00:05:45.907 ************************************ 00:05:45.907 END TEST exit_on_failed_rpc_init 00:05:45.907 ************************************ 00:05:45.907 00:05:45.907 real 0m1.860s 00:05:45.907 user 0m1.985s 00:05:45.907 sys 0m0.485s 00:05:45.907 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.907 23:39:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.907 23:39:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:45.907 ************************************ 00:05:45.907 END TEST skip_rpc 00:05:45.907 ************************************ 00:05:45.907 00:05:45.907 real 0m14.464s 00:05:45.907 user 0m13.471s 00:05:45.907 sys 0m1.672s 00:05:45.907 23:39:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.907 23:39:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.907 23:39:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.908 23:39:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.908 23:39:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.908 23:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:45.908 
************************************ 00:05:45.908 START TEST rpc_client 00:05:45.908 ************************************ 00:05:45.908 23:39:33 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.908 * Looking for test storage... 00:05:45.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:45.908 23:39:33 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.908 23:39:33 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.908 23:39:33 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.908 23:39:33 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:45.908 23:39:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.908 23:39:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.908 23:39:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.908 23:39:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:45.908 23:39:34 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.908 23:39:34 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.908 --rc genhtml_branch_coverage=1 00:05:45.908 --rc genhtml_function_coverage=1 00:05:45.908 --rc genhtml_legend=1 00:05:45.908 --rc geninfo_all_blocks=1 00:05:45.908 --rc geninfo_unexecuted_blocks=1 00:05:45.908 00:05:45.908 ' 00:05:45.908 23:39:34 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.908 --rc genhtml_branch_coverage=1 00:05:45.908 --rc genhtml_function_coverage=1 00:05:45.908 --rc genhtml_legend=1 00:05:45.908 --rc geninfo_all_blocks=1 00:05:45.908 --rc geninfo_unexecuted_blocks=1 00:05:45.908 00:05:45.908 ' 00:05:45.908 23:39:34 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.908 --rc genhtml_branch_coverage=1 00:05:45.908 --rc genhtml_function_coverage=1 00:05:45.908 --rc genhtml_legend=1 00:05:45.908 --rc geninfo_all_blocks=1 00:05:45.908 --rc geninfo_unexecuted_blocks=1 00:05:45.908 00:05:45.908 ' 00:05:45.908 23:39:34 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.908 --rc genhtml_branch_coverage=1 00:05:45.908 --rc genhtml_function_coverage=1 00:05:45.908 --rc genhtml_legend=1 00:05:45.908 --rc geninfo_all_blocks=1 00:05:45.908 --rc geninfo_unexecuted_blocks=1 00:05:45.908 00:05:45.908 ' 00:05:45.908 23:39:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:45.908 OK 00:05:46.170 23:39:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.170 00:05:46.170 real 0m0.198s 00:05:46.170 user 0m0.103s 00:05:46.170 sys 0m0.094s 00:05:46.170 23:39:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.170 23:39:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:46.170 ************************************ 00:05:46.170 END TEST rpc_client 00:05:46.170 ************************************ 00:05:46.170 23:39:34 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.170 23:39:34 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.170 23:39:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.170 23:39:34 -- common/autotest_common.sh@10 -- # set +x 00:05:46.170 ************************************ 00:05:46.170 START TEST json_config 00:05:46.170 ************************************ 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.171 23:39:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.171 23:39:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.171 23:39:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.171 23:39:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.171 23:39:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.171 23:39:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.171 23:39:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.171 23:39:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.171 23:39:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.171 23:39:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.171 23:39:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.171 23:39:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:46.171 23:39:34 json_config -- scripts/common.sh@345 -- # : 1 00:05:46.171 23:39:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.171 23:39:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.171 23:39:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:46.171 23:39:34 json_config -- scripts/common.sh@353 -- # local d=1 00:05:46.171 23:39:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.171 23:39:34 json_config -- scripts/common.sh@355 -- # echo 1 00:05:46.171 23:39:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.171 23:39:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:46.171 23:39:34 json_config -- scripts/common.sh@353 -- # local d=2 00:05:46.171 23:39:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.171 23:39:34 json_config -- scripts/common.sh@355 -- # echo 2 00:05:46.171 23:39:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.171 23:39:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.171 23:39:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.171 23:39:34 json_config -- scripts/common.sh@368 -- # return 0 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.171 --rc genhtml_branch_coverage=1 00:05:46.171 --rc genhtml_function_coverage=1 00:05:46.171 --rc genhtml_legend=1 00:05:46.171 --rc geninfo_all_blocks=1 00:05:46.171 --rc geninfo_unexecuted_blocks=1 00:05:46.171 00:05:46.171 ' 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.171 --rc genhtml_branch_coverage=1 00:05:46.171 --rc genhtml_function_coverage=1 00:05:46.171 --rc genhtml_legend=1 00:05:46.171 --rc geninfo_all_blocks=1 00:05:46.171 --rc geninfo_unexecuted_blocks=1 00:05:46.171 00:05:46.171 ' 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.171 --rc genhtml_branch_coverage=1 00:05:46.171 --rc genhtml_function_coverage=1 00:05:46.171 --rc genhtml_legend=1 00:05:46.171 --rc geninfo_all_blocks=1 00:05:46.171 --rc geninfo_unexecuted_blocks=1 00:05:46.171 00:05:46.171 ' 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.171 --rc genhtml_branch_coverage=1 00:05:46.171 --rc genhtml_function_coverage=1 00:05:46.171 --rc genhtml_legend=1 00:05:46.171 --rc geninfo_all_blocks=1 00:05:46.171 --rc geninfo_unexecuted_blocks=1 00:05:46.171 00:05:46.171 ' 00:05:46.171 23:39:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.171 23:39:34 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3185115-c870-4590-bddf-ec6cbfb7ca82 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a3185115-c870-4590-bddf-ec6cbfb7ca82 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.171 23:39:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.171 23:39:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.171 23:39:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.171 23:39:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.171 23:39:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.171 23:39:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.171 23:39:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.171 23:39:34 json_config -- paths/export.sh@5 -- # export PATH 00:05:46.171 23:39:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@51 -- # : 0 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.171 23:39:34 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.171 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.171 23:39:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.171 23:39:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.171 23:39:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:46.171 23:39:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:46.171 23:39:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:46.171 23:39:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.171 WARNING: No tests are enabled so not running JSON configuration tests 00:05:46.171 23:39:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:46.171 23:39:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:46.171 00:05:46.171 real 0m0.144s 00:05:46.171 user 0m0.089s 00:05:46.171 sys 0m0.054s 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.171 23:39:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.171 ************************************ 00:05:46.171 END TEST json_config 00:05:46.171 ************************************ 00:05:46.171 23:39:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.171 23:39:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.171 23:39:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.171 23:39:34 -- common/autotest_common.sh@10 -- # set +x 00:05:46.434 ************************************ 00:05:46.434 START TEST json_config_extra_key 00:05:46.434 ************************************ 00:05:46.434 23:39:34 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.434 23:39:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.434 23:39:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.434 23:39:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.434 23:39:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.434 23:39:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.434 23:39:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.434 23:39:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.434 23:39:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.435 23:39:34 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:46.435 23:39:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.435 23:39:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.435 --rc genhtml_branch_coverage=1 00:05:46.435 --rc genhtml_function_coverage=1 00:05:46.435 --rc genhtml_legend=1 00:05:46.435 --rc geninfo_all_blocks=1 00:05:46.435 --rc geninfo_unexecuted_blocks=1 00:05:46.435 00:05:46.435 ' 00:05:46.435 23:39:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.435 --rc genhtml_branch_coverage=1 00:05:46.435 --rc genhtml_function_coverage=1 00:05:46.435 --rc genhtml_legend=1 00:05:46.435 --rc geninfo_all_blocks=1 00:05:46.435 --rc geninfo_unexecuted_blocks=1 00:05:46.435 00:05:46.435 ' 00:05:46.435 23:39:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.435 --rc genhtml_branch_coverage=1 00:05:46.435 --rc genhtml_function_coverage=1 00:05:46.435 --rc genhtml_legend=1 00:05:46.435 --rc geninfo_all_blocks=1 00:05:46.435 --rc geninfo_unexecuted_blocks=1 00:05:46.435 00:05:46.435 ' 00:05:46.435 23:39:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.435 --rc genhtml_branch_coverage=1 00:05:46.435 --rc 
genhtml_function_coverage=1 00:05:46.435 --rc genhtml_legend=1 00:05:46.435 --rc geninfo_all_blocks=1 00:05:46.435 --rc geninfo_unexecuted_blocks=1 00:05:46.435 00:05:46.435 ' 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3185115-c870-4590-bddf-ec6cbfb7ca82 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a3185115-c870-4590-bddf-ec6cbfb7ca82 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.435 23:39:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.435 23:39:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.435 23:39:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.435 23:39:34 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.435 23:39:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:46.435 23:39:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.435 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.435 23:39:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:46.435 INFO: launching applications... 
00:05:46.435 23:39:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69764 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:46.435 Waiting for target to run... 00:05:46.435 23:39:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.436 23:39:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69764 /var/tmp/spdk_tgt.sock 00:05:46.436 23:39:34 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69764 ']' 00:05:46.436 23:39:34 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.436 23:39:34 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.436 23:39:34 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.436 23:39:34 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.436 23:39:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:46.436 [2024-11-26 23:39:34.527470] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:05:46.436 [2024-11-26 23:39:34.527775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69764 ] 00:05:47.008 [2024-11-26 23:39:34.881647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.008 [2024-11-26 23:39:34.895891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.268 00:05:47.268 INFO: shutting down applications... 00:05:47.268 23:39:35 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.268 23:39:35 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:47.268 23:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
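[editor's note] For readers following the trace above: json_config_extra_key drives spdk_tgt through json_config/common.sh, which starts the target with -m 0x1 -s 1024, an RPC socket at /var/tmp/spdk_tgt.sock and a --json configuration, polls until the target listens (the waitforlisten step), and later stops it with SIGINT. A minimal sketch of that start/wait/stop pattern follows; the probe via rpc.py rpc_get_methods and the hard-coded paths are illustrative assumptions, not a verbatim copy of the helper script:

    #!/usr/bin/env bash
    # Sketch of the start/wait/stop pattern traced in the log above.
    # Paths and the readiness probe are assumptions for illustration only.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock
    CONFIG=$1                              # e.g. extra_key.json

    # Start the target with the same core mask, memory size and RPC socket as the test.
    "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
    pid=$!

    # Wait until the target answers on its RPC socket (the role waitforlisten plays).
    for ((i = 0; i < 30; i++)); do
        "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done

    # ... run checks against the running target here ...

    # Shut down the way the test does: SIGINT, then poll until the pid is gone.
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done

The 30-iteration loop with a 0.5 s sleep mirrors the "(( i < 30 ))" / "sleep 0.5" retry visible in the shutdown trace that follows.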
00:05:47.268 23:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69764 ]] 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69764 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69764 00:05:47.268 23:39:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.839 23:39:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.839 23:39:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.839 23:39:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69764 00:05:47.839 23:39:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:47.839 23:39:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:47.839 23:39:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:47.839 23:39:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:47.839 SPDK target shutdown done 00:05:47.839 23:39:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:47.839 Success 00:05:47.839 00:05:47.839 real 0m1.566s 00:05:47.839 user 0m1.278s 00:05:47.839 sys 0m0.400s 00:05:47.839 23:39:35 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.839 23:39:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.839 ************************************ 00:05:47.839 END TEST json_config_extra_key 00:05:47.839 ************************************ 00:05:47.839 23:39:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.839 23:39:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.839 23:39:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.839 23:39:35 -- common/autotest_common.sh@10 -- # set +x 00:05:47.839 ************************************ 00:05:47.839 START TEST alias_rpc 00:05:47.839 ************************************ 00:05:47.839 23:39:35 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.113 * Looking for test storage... 
00:05:48.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:48.114 23:39:35 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:48.114 23:39:36 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:48.114 23:39:36 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.114 23:39:36 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.114 23:39:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.115 23:39:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.115 23:39:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.115 23:39:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.115 23:39:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.115 23:39:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.115 23:39:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:48.115 23:39:36 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.115 23:39:36 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.115 --rc genhtml_branch_coverage=1 00:05:48.115 --rc genhtml_function_coverage=1 00:05:48.115 --rc genhtml_legend=1 00:05:48.115 --rc geninfo_all_blocks=1 00:05:48.115 --rc geninfo_unexecuted_blocks=1 00:05:48.115 00:05:48.115 ' 00:05:48.115 23:39:36 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.115 --rc genhtml_branch_coverage=1 00:05:48.115 --rc genhtml_function_coverage=1 00:05:48.115 --rc genhtml_legend=1 00:05:48.115 --rc geninfo_all_blocks=1 00:05:48.115 --rc geninfo_unexecuted_blocks=1 00:05:48.115 00:05:48.115 ' 00:05:48.115 23:39:36 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.115 --rc genhtml_branch_coverage=1 00:05:48.115 --rc genhtml_function_coverage=1 00:05:48.115 --rc genhtml_legend=1 00:05:48.115 --rc geninfo_all_blocks=1 00:05:48.115 --rc geninfo_unexecuted_blocks=1 00:05:48.115 00:05:48.115 ' 00:05:48.115 23:39:36 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.115 --rc genhtml_branch_coverage=1 00:05:48.115 --rc genhtml_function_coverage=1 00:05:48.116 --rc genhtml_legend=1 00:05:48.116 --rc geninfo_all_blocks=1 00:05:48.116 --rc geninfo_unexecuted_blocks=1 00:05:48.116 00:05:48.116 ' 00:05:48.116 23:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.116 23:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69837 00:05:48.116 23:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69837 00:05:48.116 23:39:36 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69837 ']' 00:05:48.116 23:39:36 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.116 23:39:36 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.116 23:39:36 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.116 23:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.116 23:39:36 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.116 23:39:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.116 [2024-11-26 23:39:36.158056] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:05:48.116 [2024-11-26 23:39:36.158375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69837 ] 00:05:48.383 [2024-11-26 23:39:36.305975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.383 [2024-11-26 23:39:36.330422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.954 23:39:36 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.954 23:39:36 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.954 23:39:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:49.215 23:39:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69837 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69837 ']' 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69837 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69837 00:05:49.215 killing process with pid 69837 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69837' 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 69837 00:05:49.215 23:39:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 69837 00:05:49.480 ************************************ 00:05:49.480 END TEST alias_rpc 00:05:49.480 ************************************ 00:05:49.480 00:05:49.480 real 0m1.612s 00:05:49.480 user 0m1.674s 00:05:49.480 sys 0m0.429s 00:05:49.480 23:39:37 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.480 23:39:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.480 23:39:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:49.480 23:39:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.480 23:39:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.480 23:39:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.480 23:39:37 -- common/autotest_common.sh@10 -- # set +x 00:05:49.480 ************************************ 00:05:49.480 START TEST spdkcli_tcp 00:05:49.480 ************************************ 00:05:49.480 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.741 * Looking for test storage... 
00:05:49.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.741 23:39:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.741 --rc genhtml_branch_coverage=1 00:05:49.741 --rc genhtml_function_coverage=1 00:05:49.741 --rc genhtml_legend=1 00:05:49.741 --rc geninfo_all_blocks=1 00:05:49.741 --rc geninfo_unexecuted_blocks=1 00:05:49.741 00:05:49.741 ' 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.741 --rc genhtml_branch_coverage=1 00:05:49.741 --rc genhtml_function_coverage=1 00:05:49.741 --rc genhtml_legend=1 00:05:49.741 --rc geninfo_all_blocks=1 00:05:49.741 --rc geninfo_unexecuted_blocks=1 00:05:49.741 
00:05:49.741 ' 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.741 --rc genhtml_branch_coverage=1 00:05:49.741 --rc genhtml_function_coverage=1 00:05:49.741 --rc genhtml_legend=1 00:05:49.741 --rc geninfo_all_blocks=1 00:05:49.741 --rc geninfo_unexecuted_blocks=1 00:05:49.741 00:05:49.741 ' 00:05:49.741 23:39:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.741 --rc genhtml_branch_coverage=1 00:05:49.741 --rc genhtml_function_coverage=1 00:05:49.741 --rc genhtml_legend=1 00:05:49.741 --rc geninfo_all_blocks=1 00:05:49.741 --rc geninfo_unexecuted_blocks=1 00:05:49.741 00:05:49.741 ' 00:05:49.741 23:39:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:49.741 23:39:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:49.742 23:39:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:49.742 23:39:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.742 23:39:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.742 23:39:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.742 23:39:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.742 23:39:37 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.742 23:39:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.742 23:39:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69917 00:05:49.742 23:39:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69917 00:05:49.742 23:39:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69917 ']' 00:05:49.742 23:39:37 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.742 23:39:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.742 23:39:37 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.742 23:39:37 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.742 23:39:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.742 23:39:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.742 [2024-11-26 23:39:37.822291] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:05:49.742 [2024-11-26 23:39:37.822418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69917 ] 00:05:50.002 [2024-11-26 23:39:37.966982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.002 [2024-11-26 23:39:37.993556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.002 [2024-11-26 23:39:37.993635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.573 23:39:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.573 23:39:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:50.573 23:39:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69934 00:05:50.573 23:39:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.573 23:39:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.835 [ 00:05:50.835 "bdev_malloc_delete", 00:05:50.835 "bdev_malloc_create", 00:05:50.835 "bdev_null_resize", 00:05:50.835 "bdev_null_delete", 00:05:50.835 "bdev_null_create", 00:05:50.835 "bdev_nvme_cuse_unregister", 00:05:50.835 "bdev_nvme_cuse_register", 00:05:50.835 "bdev_opal_new_user", 00:05:50.835 "bdev_opal_set_lock_state", 00:05:50.835 "bdev_opal_delete", 00:05:50.835 "bdev_opal_get_info", 00:05:50.835 "bdev_opal_create", 00:05:50.835 "bdev_nvme_opal_revert", 00:05:50.835 "bdev_nvme_opal_init", 00:05:50.835 "bdev_nvme_send_cmd", 00:05:50.835 "bdev_nvme_set_keys", 00:05:50.835 "bdev_nvme_get_path_iostat", 00:05:50.835 "bdev_nvme_get_mdns_discovery_info", 00:05:50.835 "bdev_nvme_stop_mdns_discovery", 00:05:50.835 "bdev_nvme_start_mdns_discovery", 00:05:50.835 "bdev_nvme_set_multipath_policy", 00:05:50.835 "bdev_nvme_set_preferred_path", 00:05:50.835 "bdev_nvme_get_io_paths", 00:05:50.835 "bdev_nvme_remove_error_injection", 00:05:50.835 "bdev_nvme_add_error_injection", 00:05:50.835 "bdev_nvme_get_discovery_info", 00:05:50.835 "bdev_nvme_stop_discovery", 00:05:50.835 "bdev_nvme_start_discovery", 00:05:50.835 "bdev_nvme_get_controller_health_info", 00:05:50.835 "bdev_nvme_disable_controller", 00:05:50.835 "bdev_nvme_enable_controller", 00:05:50.835 "bdev_nvme_reset_controller", 00:05:50.835 "bdev_nvme_get_transport_statistics", 00:05:50.835 "bdev_nvme_apply_firmware", 00:05:50.835 "bdev_nvme_detach_controller", 00:05:50.835 "bdev_nvme_get_controllers", 00:05:50.835 "bdev_nvme_attach_controller", 00:05:50.835 "bdev_nvme_set_hotplug", 00:05:50.835 "bdev_nvme_set_options", 00:05:50.835 "bdev_passthru_delete", 00:05:50.835 "bdev_passthru_create", 00:05:50.835 "bdev_lvol_set_parent_bdev", 00:05:50.835 "bdev_lvol_set_parent", 00:05:50.835 "bdev_lvol_check_shallow_copy", 00:05:50.835 "bdev_lvol_start_shallow_copy", 00:05:50.835 "bdev_lvol_grow_lvstore", 00:05:50.835 "bdev_lvol_get_lvols", 00:05:50.835 "bdev_lvol_get_lvstores", 00:05:50.835 "bdev_lvol_delete", 00:05:50.835 "bdev_lvol_set_read_only", 00:05:50.835 "bdev_lvol_resize", 00:05:50.835 "bdev_lvol_decouple_parent", 00:05:50.835 "bdev_lvol_inflate", 00:05:50.835 "bdev_lvol_rename", 00:05:50.835 "bdev_lvol_clone_bdev", 00:05:50.835 "bdev_lvol_clone", 00:05:50.835 "bdev_lvol_snapshot", 00:05:50.835 "bdev_lvol_create", 00:05:50.835 "bdev_lvol_delete_lvstore", 00:05:50.835 "bdev_lvol_rename_lvstore", 00:05:50.835 
"bdev_lvol_create_lvstore", 00:05:50.835 "bdev_raid_set_options", 00:05:50.835 "bdev_raid_remove_base_bdev", 00:05:50.835 "bdev_raid_add_base_bdev", 00:05:50.835 "bdev_raid_delete", 00:05:50.835 "bdev_raid_create", 00:05:50.835 "bdev_raid_get_bdevs", 00:05:50.835 "bdev_error_inject_error", 00:05:50.835 "bdev_error_delete", 00:05:50.835 "bdev_error_create", 00:05:50.835 "bdev_split_delete", 00:05:50.835 "bdev_split_create", 00:05:50.835 "bdev_delay_delete", 00:05:50.835 "bdev_delay_create", 00:05:50.835 "bdev_delay_update_latency", 00:05:50.835 "bdev_zone_block_delete", 00:05:50.835 "bdev_zone_block_create", 00:05:50.835 "blobfs_create", 00:05:50.835 "blobfs_detect", 00:05:50.835 "blobfs_set_cache_size", 00:05:50.835 "bdev_xnvme_delete", 00:05:50.835 "bdev_xnvme_create", 00:05:50.835 "bdev_aio_delete", 00:05:50.835 "bdev_aio_rescan", 00:05:50.835 "bdev_aio_create", 00:05:50.835 "bdev_ftl_set_property", 00:05:50.835 "bdev_ftl_get_properties", 00:05:50.835 "bdev_ftl_get_stats", 00:05:50.835 "bdev_ftl_unmap", 00:05:50.835 "bdev_ftl_unload", 00:05:50.835 "bdev_ftl_delete", 00:05:50.835 "bdev_ftl_load", 00:05:50.835 "bdev_ftl_create", 00:05:50.835 "bdev_virtio_attach_controller", 00:05:50.835 "bdev_virtio_scsi_get_devices", 00:05:50.835 "bdev_virtio_detach_controller", 00:05:50.836 "bdev_virtio_blk_set_hotplug", 00:05:50.836 "bdev_iscsi_delete", 00:05:50.836 "bdev_iscsi_create", 00:05:50.836 "bdev_iscsi_set_options", 00:05:50.836 "accel_error_inject_error", 00:05:50.836 "ioat_scan_accel_module", 00:05:50.836 "dsa_scan_accel_module", 00:05:50.836 "iaa_scan_accel_module", 00:05:50.836 "keyring_file_remove_key", 00:05:50.836 "keyring_file_add_key", 00:05:50.836 "keyring_linux_set_options", 00:05:50.836 "fsdev_aio_delete", 00:05:50.836 "fsdev_aio_create", 00:05:50.836 "iscsi_get_histogram", 00:05:50.836 "iscsi_enable_histogram", 00:05:50.836 "iscsi_set_options", 00:05:50.836 "iscsi_get_auth_groups", 00:05:50.836 "iscsi_auth_group_remove_secret", 00:05:50.836 "iscsi_auth_group_add_secret", 00:05:50.836 "iscsi_delete_auth_group", 00:05:50.836 "iscsi_create_auth_group", 00:05:50.836 "iscsi_set_discovery_auth", 00:05:50.836 "iscsi_get_options", 00:05:50.836 "iscsi_target_node_request_logout", 00:05:50.836 "iscsi_target_node_set_redirect", 00:05:50.836 "iscsi_target_node_set_auth", 00:05:50.836 "iscsi_target_node_add_lun", 00:05:50.836 "iscsi_get_stats", 00:05:50.836 "iscsi_get_connections", 00:05:50.836 "iscsi_portal_group_set_auth", 00:05:50.836 "iscsi_start_portal_group", 00:05:50.836 "iscsi_delete_portal_group", 00:05:50.836 "iscsi_create_portal_group", 00:05:50.836 "iscsi_get_portal_groups", 00:05:50.836 "iscsi_delete_target_node", 00:05:50.836 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.836 "iscsi_target_node_add_pg_ig_maps", 00:05:50.836 "iscsi_create_target_node", 00:05:50.836 "iscsi_get_target_nodes", 00:05:50.836 "iscsi_delete_initiator_group", 00:05:50.836 "iscsi_initiator_group_remove_initiators", 00:05:50.836 "iscsi_initiator_group_add_initiators", 00:05:50.836 "iscsi_create_initiator_group", 00:05:50.836 "iscsi_get_initiator_groups", 00:05:50.836 "nvmf_set_crdt", 00:05:50.836 "nvmf_set_config", 00:05:50.836 "nvmf_set_max_subsystems", 00:05:50.836 "nvmf_stop_mdns_prr", 00:05:50.836 "nvmf_publish_mdns_prr", 00:05:50.836 "nvmf_subsystem_get_listeners", 00:05:50.836 "nvmf_subsystem_get_qpairs", 00:05:50.836 "nvmf_subsystem_get_controllers", 00:05:50.836 "nvmf_get_stats", 00:05:50.836 "nvmf_get_transports", 00:05:50.836 "nvmf_create_transport", 00:05:50.836 "nvmf_get_targets", 00:05:50.836 
"nvmf_delete_target", 00:05:50.836 "nvmf_create_target", 00:05:50.836 "nvmf_subsystem_allow_any_host", 00:05:50.836 "nvmf_subsystem_set_keys", 00:05:50.836 "nvmf_subsystem_remove_host", 00:05:50.836 "nvmf_subsystem_add_host", 00:05:50.836 "nvmf_ns_remove_host", 00:05:50.836 "nvmf_ns_add_host", 00:05:50.836 "nvmf_subsystem_remove_ns", 00:05:50.836 "nvmf_subsystem_set_ns_ana_group", 00:05:50.836 "nvmf_subsystem_add_ns", 00:05:50.836 "nvmf_subsystem_listener_set_ana_state", 00:05:50.836 "nvmf_discovery_get_referrals", 00:05:50.836 "nvmf_discovery_remove_referral", 00:05:50.836 "nvmf_discovery_add_referral", 00:05:50.836 "nvmf_subsystem_remove_listener", 00:05:50.836 "nvmf_subsystem_add_listener", 00:05:50.836 "nvmf_delete_subsystem", 00:05:50.836 "nvmf_create_subsystem", 00:05:50.836 "nvmf_get_subsystems", 00:05:50.836 "env_dpdk_get_mem_stats", 00:05:50.836 "nbd_get_disks", 00:05:50.836 "nbd_stop_disk", 00:05:50.836 "nbd_start_disk", 00:05:50.836 "ublk_recover_disk", 00:05:50.836 "ublk_get_disks", 00:05:50.836 "ublk_stop_disk", 00:05:50.836 "ublk_start_disk", 00:05:50.836 "ublk_destroy_target", 00:05:50.836 "ublk_create_target", 00:05:50.836 "virtio_blk_create_transport", 00:05:50.836 "virtio_blk_get_transports", 00:05:50.836 "vhost_controller_set_coalescing", 00:05:50.836 "vhost_get_controllers", 00:05:50.836 "vhost_delete_controller", 00:05:50.836 "vhost_create_blk_controller", 00:05:50.836 "vhost_scsi_controller_remove_target", 00:05:50.836 "vhost_scsi_controller_add_target", 00:05:50.836 "vhost_start_scsi_controller", 00:05:50.836 "vhost_create_scsi_controller", 00:05:50.836 "thread_set_cpumask", 00:05:50.836 "scheduler_set_options", 00:05:50.836 "framework_get_governor", 00:05:50.836 "framework_get_scheduler", 00:05:50.836 "framework_set_scheduler", 00:05:50.836 "framework_get_reactors", 00:05:50.836 "thread_get_io_channels", 00:05:50.836 "thread_get_pollers", 00:05:50.836 "thread_get_stats", 00:05:50.836 "framework_monitor_context_switch", 00:05:50.836 "spdk_kill_instance", 00:05:50.836 "log_enable_timestamps", 00:05:50.836 "log_get_flags", 00:05:50.836 "log_clear_flag", 00:05:50.836 "log_set_flag", 00:05:50.836 "log_get_level", 00:05:50.836 "log_set_level", 00:05:50.836 "log_get_print_level", 00:05:50.836 "log_set_print_level", 00:05:50.836 "framework_enable_cpumask_locks", 00:05:50.836 "framework_disable_cpumask_locks", 00:05:50.836 "framework_wait_init", 00:05:50.836 "framework_start_init", 00:05:50.836 "scsi_get_devices", 00:05:50.836 "bdev_get_histogram", 00:05:50.836 "bdev_enable_histogram", 00:05:50.836 "bdev_set_qos_limit", 00:05:50.836 "bdev_set_qd_sampling_period", 00:05:50.836 "bdev_get_bdevs", 00:05:50.836 "bdev_reset_iostat", 00:05:50.836 "bdev_get_iostat", 00:05:50.836 "bdev_examine", 00:05:50.836 "bdev_wait_for_examine", 00:05:50.836 "bdev_set_options", 00:05:50.836 "accel_get_stats", 00:05:50.836 "accel_set_options", 00:05:50.836 "accel_set_driver", 00:05:50.836 "accel_crypto_key_destroy", 00:05:50.836 "accel_crypto_keys_get", 00:05:50.836 "accel_crypto_key_create", 00:05:50.836 "accel_assign_opc", 00:05:50.836 "accel_get_module_info", 00:05:50.836 "accel_get_opc_assignments", 00:05:50.836 "vmd_rescan", 00:05:50.836 "vmd_remove_device", 00:05:50.836 "vmd_enable", 00:05:50.836 "sock_get_default_impl", 00:05:50.836 "sock_set_default_impl", 00:05:50.836 "sock_impl_set_options", 00:05:50.836 "sock_impl_get_options", 00:05:50.836 "iobuf_get_stats", 00:05:50.836 "iobuf_set_options", 00:05:50.836 "keyring_get_keys", 00:05:50.836 "framework_get_pci_devices", 00:05:50.836 
"framework_get_config", 00:05:50.836 "framework_get_subsystems", 00:05:50.836 "fsdev_set_opts", 00:05:50.836 "fsdev_get_opts", 00:05:50.836 "trace_get_info", 00:05:50.836 "trace_get_tpoint_group_mask", 00:05:50.836 "trace_disable_tpoint_group", 00:05:50.836 "trace_enable_tpoint_group", 00:05:50.836 "trace_clear_tpoint_mask", 00:05:50.836 "trace_set_tpoint_mask", 00:05:50.836 "notify_get_notifications", 00:05:50.836 "notify_get_types", 00:05:50.836 "spdk_get_version", 00:05:50.836 "rpc_get_methods" 00:05:50.836 ] 00:05:50.836 23:39:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.836 23:39:38 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.836 23:39:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.836 23:39:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.836 23:39:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69917 00:05:50.836 23:39:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69917 ']' 00:05:50.836 23:39:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69917 00:05:50.836 23:39:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:51.098 23:39:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.098 23:39:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69917 00:05:51.098 killing process with pid 69917 00:05:51.098 23:39:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.098 23:39:38 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.098 23:39:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69917' 00:05:51.098 23:39:38 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69917 00:05:51.098 23:39:38 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69917 00:05:51.678 ************************************ 00:05:51.678 END TEST spdkcli_tcp 00:05:51.678 ************************************ 00:05:51.678 00:05:51.678 real 0m1.898s 00:05:51.678 user 0m3.354s 00:05:51.678 sys 0m0.517s 00:05:51.678 23:39:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.678 23:39:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.678 23:39:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.678 23:39:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.678 23:39:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.678 23:39:39 -- common/autotest_common.sh@10 -- # set +x 00:05:51.678 ************************************ 00:05:51.678 START TEST dpdk_mem_utility 00:05:51.678 ************************************ 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.678 * Looking for test storage... 
00:05:51.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.678 23:39:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.678 --rc genhtml_branch_coverage=1 00:05:51.678 --rc genhtml_function_coverage=1 00:05:51.678 --rc genhtml_legend=1 00:05:51.678 --rc geninfo_all_blocks=1 00:05:51.678 --rc geninfo_unexecuted_blocks=1 00:05:51.678 00:05:51.678 ' 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.678 --rc 
genhtml_branch_coverage=1 00:05:51.678 --rc genhtml_function_coverage=1 00:05:51.678 --rc genhtml_legend=1 00:05:51.678 --rc geninfo_all_blocks=1 00:05:51.678 --rc geninfo_unexecuted_blocks=1 00:05:51.678 00:05:51.678 ' 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.678 --rc genhtml_branch_coverage=1 00:05:51.678 --rc genhtml_function_coverage=1 00:05:51.678 --rc genhtml_legend=1 00:05:51.678 --rc geninfo_all_blocks=1 00:05:51.678 --rc geninfo_unexecuted_blocks=1 00:05:51.678 00:05:51.678 ' 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.678 --rc genhtml_branch_coverage=1 00:05:51.678 --rc genhtml_function_coverage=1 00:05:51.678 --rc genhtml_legend=1 00:05:51.678 --rc geninfo_all_blocks=1 00:05:51.678 --rc geninfo_unexecuted_blocks=1 00:05:51.678 00:05:51.678 ' 00:05:51.678 23:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:51.678 23:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70017 00:05:51.678 23:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70017 00:05:51.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70017 ']' 00:05:51.678 23:39:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.678 23:39:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.678 [2024-11-26 23:39:39.780212] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:05:51.678 [2024-11-26 23:39:39.780344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70017 ] 00:05:51.939 [2024-11-26 23:39:39.925368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.939 [2024-11-26 23:39:39.950540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.511 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.511 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:52.511 23:39:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.511 23:39:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.511 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.511 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.511 { 00:05:52.511 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.511 } 00:05:52.511 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.511 23:39:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:52.775 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:52.775 1 heaps totaling size 818.000000 MiB 00:05:52.775 size: 818.000000 MiB heap id: 0 00:05:52.775 end heaps---------- 00:05:52.775 9 mempools totaling size 603.782043 MiB 00:05:52.775 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.775 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.775 size: 100.555481 MiB name: bdev_io_70017 00:05:52.775 size: 50.003479 MiB name: msgpool_70017 00:05:52.775 size: 36.509338 MiB name: fsdev_io_70017 00:05:52.775 size: 21.763794 MiB name: PDU_Pool 00:05:52.775 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.775 size: 4.133484 MiB name: evtpool_70017 00:05:52.775 size: 0.026123 MiB name: Session_Pool 00:05:52.775 end mempools------- 00:05:52.775 6 memzones totaling size 4.142822 MiB 00:05:52.775 size: 1.000366 MiB name: RG_ring_0_70017 00:05:52.775 size: 1.000366 MiB name: RG_ring_1_70017 00:05:52.775 size: 1.000366 MiB name: RG_ring_4_70017 00:05:52.775 size: 1.000366 MiB name: RG_ring_5_70017 00:05:52.775 size: 0.125366 MiB name: RG_ring_2_70017 00:05:52.775 size: 0.015991 MiB name: RG_ring_3_70017 00:05:52.775 end memzones------- 00:05:52.775 23:39:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:52.775 heap id: 0 total size: 818.000000 MiB number of busy elements: 309 number of free elements: 15 00:05:52.775 list of free elements. 
size: 10.803955 MiB 00:05:52.775 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:52.775 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:52.775 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:52.775 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:52.775 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:52.775 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:52.775 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:52.775 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:52.775 element at address: 0x20001ae00000 with size: 0.569153 MiB 00:05:52.775 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:52.775 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:52.775 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:52.775 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:52.775 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:52.775 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:52.775 list of standard malloc elements. size: 199.267151 MiB 00:05:52.775 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:52.775 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:52.775 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:52.775 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:52.775 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:52.775 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:52.775 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:52.775 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:52.775 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:52.775 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:52.775 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:52.775 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:52.776 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:52.776 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:52.776 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:52.776 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:52.776 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93280 with size: 0.000183 MiB 
00:05:52.777 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:52.777 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:52.777 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:52.777 element at 
address: 0x20002826c480 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:52.777 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826e940 
with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:52.778 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:52.778 list of memzone associated elements. 
size: 607.928894 MiB 00:05:52.778 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:52.778 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:52.778 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:52.778 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:52.778 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:52.778 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_70017_0 00:05:52.778 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:52.778 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70017_0 00:05:52.778 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:52.778 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70017_0 00:05:52.778 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:52.778 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:52.778 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:52.778 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:52.778 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:52.778 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70017_0 00:05:52.778 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:52.778 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70017 00:05:52.778 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:52.778 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70017 00:05:52.778 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:52.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:52.778 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:52.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:52.778 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:52.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:52.778 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:52.778 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:52.778 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:52.778 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70017 00:05:52.778 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:52.778 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70017 00:05:52.778 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:52.778 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70017 00:05:52.778 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:52.778 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70017 00:05:52.778 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:52.778 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70017 00:05:52.778 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:52.778 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70017 00:05:52.778 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:52.778 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:52.778 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:52.778 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:52.778 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:52.778 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:52.778 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:52.778 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70017 00:05:52.778 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:52.778 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70017 00:05:52.778 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:52.778 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:52.778 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:52.778 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:52.778 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:52.778 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70017 00:05:52.778 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:52.778 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:52.778 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:52.778 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70017 00:05:52.778 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:52.778 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70017 00:05:52.778 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:52.778 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70017 00:05:52.778 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:52.778 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:52.778 23:39:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:52.778 23:39:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70017 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70017 ']' 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70017 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70017 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70017' 00:05:52.778 killing process with pid 70017 00:05:52.778 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70017 00:05:52.779 23:39:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70017 00:05:53.039 00:05:53.039 real 0m1.520s 00:05:53.039 user 0m1.529s 00:05:53.039 sys 0m0.410s 00:05:53.039 23:39:41 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.039 ************************************ 00:05:53.039 END TEST dpdk_mem_utility 00:05:53.039 ************************************ 00:05:53.039 23:39:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.039 23:39:41 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.039 23:39:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.039 23:39:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.039 23:39:41 -- common/autotest_common.sh@10 -- # set +x 
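Editor's note: the dpdk_mem_utility pass above exercises SPDK's memory-introspection path end to end. A minimal stand-alone sketch of the same flow, assuming a running spdk_tgt and the stock scripts/rpc.py, with paths relative to the SPDK repo root (the dump filename comes from the RPC's own reply in the log above):
    ./build/bin/spdk_tgt &                      # target whose DPDK heap we want to inspect
    ./scripts/rpc.py env_dpdk_get_mem_stats     # asks the target to write /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                  # summarizes heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py -m 0             # per-element detail, as in the heap-0 listing above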
00:05:53.039 ************************************ 00:05:53.039 START TEST event 00:05:53.039 ************************************ 00:05:53.039 23:39:41 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.299 * Looking for test storage... 00:05:53.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.299 23:39:41 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.299 23:39:41 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.299 23:39:41 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.299 23:39:41 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.299 23:39:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.299 23:39:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.299 23:39:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.299 23:39:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.299 23:39:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.299 23:39:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.299 23:39:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.299 23:39:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.299 23:39:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.299 23:39:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.299 23:39:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.299 23:39:41 event -- scripts/common.sh@344 -- # case "$op" in 00:05:53.299 23:39:41 event -- scripts/common.sh@345 -- # : 1 00:05:53.299 23:39:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.299 23:39:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.299 23:39:41 event -- scripts/common.sh@365 -- # decimal 1 00:05:53.299 23:39:41 event -- scripts/common.sh@353 -- # local d=1 00:05:53.299 23:39:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.299 23:39:41 event -- scripts/common.sh@355 -- # echo 1 00:05:53.299 23:39:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.299 23:39:41 event -- scripts/common.sh@366 -- # decimal 2 00:05:53.299 23:39:41 event -- scripts/common.sh@353 -- # local d=2 00:05:53.299 23:39:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.299 23:39:41 event -- scripts/common.sh@355 -- # echo 2 00:05:53.299 23:39:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.299 23:39:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.299 23:39:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.299 23:39:41 event -- scripts/common.sh@368 -- # return 0 00:05:53.299 23:39:41 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.299 23:39:41 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.299 --rc genhtml_branch_coverage=1 00:05:53.299 --rc genhtml_function_coverage=1 00:05:53.299 --rc genhtml_legend=1 00:05:53.299 --rc geninfo_all_blocks=1 00:05:53.299 --rc geninfo_unexecuted_blocks=1 00:05:53.299 00:05:53.299 ' 00:05:53.299 23:39:41 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.299 --rc genhtml_branch_coverage=1 00:05:53.299 --rc genhtml_function_coverage=1 00:05:53.299 --rc genhtml_legend=1 00:05:53.299 --rc 
geninfo_all_blocks=1 00:05:53.299 --rc geninfo_unexecuted_blocks=1 00:05:53.299 00:05:53.299 ' 00:05:53.299 23:39:41 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.300 --rc genhtml_branch_coverage=1 00:05:53.300 --rc genhtml_function_coverage=1 00:05:53.300 --rc genhtml_legend=1 00:05:53.300 --rc geninfo_all_blocks=1 00:05:53.300 --rc geninfo_unexecuted_blocks=1 00:05:53.300 00:05:53.300 ' 00:05:53.300 23:39:41 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.300 --rc genhtml_branch_coverage=1 00:05:53.300 --rc genhtml_function_coverage=1 00:05:53.300 --rc genhtml_legend=1 00:05:53.300 --rc geninfo_all_blocks=1 00:05:53.300 --rc geninfo_unexecuted_blocks=1 00:05:53.300 00:05:53.300 ' 00:05:53.300 23:39:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.300 23:39:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.300 23:39:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.300 23:39:41 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:53.300 23:39:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.300 23:39:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.300 ************************************ 00:05:53.300 START TEST event_perf 00:05:53.300 ************************************ 00:05:53.300 23:39:41 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.300 Running I/O for 1 seconds...[2024-11-26 23:39:41.317134] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:05:53.300 [2024-11-26 23:39:41.317670] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70097 ] 00:05:53.560 [2024-11-26 23:39:41.462931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.560 [2024-11-26 23:39:41.490568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.560 [2024-11-26 23:39:41.490889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.560 [2024-11-26 23:39:41.491089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.560 Running I/O for 1 seconds...[2024-11-26 23:39:41.491178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.503 00:05:54.503 lcore 0: 191246 00:05:54.503 lcore 1: 191244 00:05:54.503 lcore 2: 191244 00:05:54.503 lcore 3: 191246 00:05:54.503 done. 
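Editor's note: the event_perf run above is the stand-alone benchmark binary invoked directly by run_test; a sketch of that invocation with the flags read from the command line recorded in the log:
    ./test/event/event_perf/event_perf -m 0xF -t 1
    # -m 0xF : one reactor on each of cores 0-3 (four lcores are reported above)
    # -t 1   : apparent run time in seconds, judging by the "Running I/O for 1 seconds" banner
    # output : one "lcore N: <events processed>" line per reactor, then "done."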
00:05:54.503 00:05:54.503 real 0m1.256s 00:05:54.503 user 0m4.072s 00:05:54.503 sys 0m0.067s 00:05:54.503 23:39:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.503 23:39:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.503 ************************************ 00:05:54.503 END TEST event_perf 00:05:54.503 ************************************ 00:05:54.503 23:39:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.503 23:39:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:54.503 23:39:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.503 23:39:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.503 ************************************ 00:05:54.503 START TEST event_reactor 00:05:54.503 ************************************ 00:05:54.503 23:39:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.763 [2024-11-26 23:39:42.633691] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:05:54.763 [2024-11-26 23:39:42.633990] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70137 ] 00:05:54.763 [2024-11-26 23:39:42.778697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.763 [2024-11-26 23:39:42.802827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.149 test_start 00:05:56.149 oneshot 00:05:56.149 tick 100 00:05:56.149 tick 100 00:05:56.149 tick 250 00:05:56.149 tick 100 00:05:56.149 tick 100 00:05:56.149 tick 100 00:05:56.149 tick 250 00:05:56.149 tick 500 00:05:56.149 tick 100 00:05:56.149 tick 100 00:05:56.149 tick 250 00:05:56.149 tick 100 00:05:56.149 tick 100 00:05:56.149 test_end 00:05:56.149 00:05:56.149 real 0m1.246s 00:05:56.149 user 0m1.090s 00:05:56.149 sys 0m0.049s 00:05:56.149 ************************************ 00:05:56.149 END TEST event_reactor 00:05:56.149 ************************************ 00:05:56.149 23:39:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.149 23:39:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.149 23:39:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.149 23:39:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:56.149 23:39:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.149 23:39:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.149 ************************************ 00:05:56.149 START TEST event_reactor_perf 00:05:56.149 ************************************ 00:05:56.149 23:39:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.149 [2024-11-26 23:39:43.939385] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:05:56.149 [2024-11-26 23:39:43.939508] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70168 ] 00:05:56.149 [2024-11-26 23:39:44.085892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.149 [2024-11-26 23:39:44.109540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.088 test_start 00:05:57.088 test_end 00:05:57.088 Performance: 314281 events per second 00:05:57.088 ************************************ 00:05:57.088 END TEST event_reactor_perf 00:05:57.088 ************************************ 00:05:57.088 00:05:57.088 real 0m1.237s 00:05:57.088 user 0m1.080s 00:05:57.088 sys 0m0.051s 00:05:57.088 23:39:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.088 23:39:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.088 23:39:45 event -- event/event.sh@49 -- # uname -s 00:05:57.088 23:39:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:57.088 23:39:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.088 23:39:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.088 23:39:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.088 23:39:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.088 ************************************ 00:05:57.088 START TEST event_scheduler 00:05:57.088 ************************************ 00:05:57.088 23:39:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.347 * Looking for test storage... 
00:05:57.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.347 23:39:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.347 --rc genhtml_branch_coverage=1 00:05:57.347 --rc genhtml_function_coverage=1 00:05:57.347 --rc genhtml_legend=1 00:05:57.347 --rc geninfo_all_blocks=1 00:05:57.347 --rc geninfo_unexecuted_blocks=1 00:05:57.347 00:05:57.347 ' 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.347 --rc genhtml_branch_coverage=1 00:05:57.347 --rc genhtml_function_coverage=1 00:05:57.347 --rc genhtml_legend=1 00:05:57.347 --rc geninfo_all_blocks=1 00:05:57.347 --rc geninfo_unexecuted_blocks=1 00:05:57.347 00:05:57.347 ' 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.347 --rc genhtml_branch_coverage=1 00:05:57.347 --rc genhtml_function_coverage=1 00:05:57.347 --rc genhtml_legend=1 00:05:57.347 --rc geninfo_all_blocks=1 00:05:57.347 --rc geninfo_unexecuted_blocks=1 00:05:57.347 00:05:57.347 ' 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.347 --rc genhtml_branch_coverage=1 00:05:57.347 --rc genhtml_function_coverage=1 00:05:57.347 --rc genhtml_legend=1 00:05:57.347 --rc geninfo_all_blocks=1 00:05:57.347 --rc geninfo_unexecuted_blocks=1 00:05:57.347 00:05:57.347 ' 00:05:57.347 23:39:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.347 23:39:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70238 00:05:57.347 23:39:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.347 23:39:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
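Editor's note: the scheduler test app is deliberately launched with --wait-for-rpc so a scheduler can be selected before the framework initializes; the log lines that follow show exactly that RPC ordering. A condensed sketch, assuming the stock scripts/rpc.py and flags copied from the launch command above (-p picks the main core, matching the --main-lcore=2 EAL argument below):
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic   # pick the dynamic scheduler while still pre-init
    ./scripts/rpc.py framework_start_init              # then bring the reactors and governor up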
00:05:57.347 23:39:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70238 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70238 ']' 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.347 23:39:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.347 [2024-11-26 23:39:45.399069] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:05:57.347 [2024-11-26 23:39:45.399199] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70238 ] 00:05:57.605 [2024-11-26 23:39:45.546899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.605 [2024-11-26 23:39:45.575841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.605 [2024-11-26 23:39:45.575950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.605 [2024-11-26 23:39:45.576188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.605 [2024-11-26 23:39:45.576340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:58.271 23:39:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.271 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.271 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.271 POWER: Cannot set governor of lcore 0 to performance 00:05:58.271 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.271 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.271 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:58.271 POWER: Unable to set Power Management Environment for lcore 0 00:05:58.271 [2024-11-26 23:39:46.245627] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:58.271 [2024-11-26 23:39:46.245648] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:58.271 [2024-11-26 23:39:46.245678] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:58.271 [2024-11-26 23:39:46.245706] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:58.271 [2024-11-26 23:39:46.245724] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:58.271 [2024-11-26 23:39:46.245733] 
scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 [2024-11-26 23:39:46.318214] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 ************************************ 00:05:58.271 START TEST scheduler_create_thread 00:05:58.271 ************************************ 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 2 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 3 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 4 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 5 00:05:58.271 23:39:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 6 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 7 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 8 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.271 9 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.271 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.529 10 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:58.529 23:39:46 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.529 23:39:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.461 ************************************ 00:05:59.461 END TEST scheduler_create_thread 00:05:59.461 ************************************ 00:05:59.461 23:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.461 00:05:59.461 real 0m1.170s 00:05:59.461 user 0m0.015s 00:05:59.461 sys 0m0.001s 00:05:59.461 23:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.461 23:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.461 23:39:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.461 23:39:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70238 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70238 ']' 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70238 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70238 00:05:59.461 killing process with pid 70238 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70238' 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70238 00:05:59.461 23:39:47 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70238 00:06:00.029 [2024-11-26 23:39:47.976221] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
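The scheduler_create_thread trace above exercises SPDK purely over JSON-RPC: finish framework init, create pinned and unpinned threads at different activity levels, then retarget one thread and delete another by id. A minimal stand-alone sketch of the same calls, assuming an SPDK app already listening on its default RPC socket and the test's scheduler_plugin being importable by scripts/rpc.py:

  # Sketch only: mirrors the rpc_cmd calls traced in scheduler.sh@12-26 above.
  RPC="./scripts/rpc.py --plugin scheduler_plugin"
  ./scripts/rpc.py framework_start_init
  # Threads pinned to core 0 (cpumask 0x1): one fully busy, one fully idle
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # Unpinned thread; capture its id, change its activity to 50%, then delete it
  tid=$($RPC scheduler_thread_create -n half_active -a 0)
  $RPC scheduler_thread_set_active "$tid" 50
  $RPC scheduler_thread_delete "$tid"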
00:06:00.029 00:06:00.029 real 0m2.947s 00:06:00.029 user 0m5.098s 00:06:00.029 sys 0m0.345s 00:06:00.029 23:39:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.029 23:39:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.029 ************************************ 00:06:00.029 END TEST event_scheduler 00:06:00.029 ************************************ 00:06:00.288 23:39:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:00.288 23:39:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:00.288 23:39:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.288 23:39:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.288 23:39:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.288 ************************************ 00:06:00.288 START TEST app_repeat 00:06:00.288 ************************************ 00:06:00.288 23:39:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:00.288 Process app_repeat pid: 70322 00:06:00.288 spdk_app_start Round 0 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70322 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70322' 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.288 23:39:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70322 /var/tmp/spdk-nbd.sock 00:06:00.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.288 23:39:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70322 ']' 00:06:00.288 23:39:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.288 23:39:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.288 23:39:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.288 23:39:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.288 23:39:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.288 [2024-11-26 23:39:48.236806] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
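The app_repeat test that starts here launches the app under test with an explicit RPC socket (-r), a two-core mask (-m 0x3) and the -t value from the harness, then blocks in waitforlisten until that socket answers. A rough approximation of that startup, assuming the SPDK repo root as working directory and an arbitrary poll interval:

  # Sketch: start the app and wait for its RPC socket to come up.
  SOCK=/var/tmp/spdk-nbd.sock
  ./test/event/app_repeat/app_repeat -r "$SOCK" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'kill "$repeat_pid"' SIGINT SIGTERM EXIT
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s "$SOCK" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done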
00:06:00.289 [2024-11-26 23:39:48.236928] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70322 ] 00:06:00.289 [2024-11-26 23:39:48.374832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.289 [2024-11-26 23:39:48.400443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.289 [2024-11-26 23:39:48.400477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.230 23:39:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.230 23:39:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.230 23:39:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.230 Malloc0 00:06:01.230 23:39:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.489 Malloc1 00:06:01.489 23:39:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.489 23:39:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.750 /dev/nbd0 00:06:01.750 23:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.750 23:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.750 23:39:49 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.750 1+0 records in 00:06:01.750 1+0 records out 00:06:01.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235766 s, 17.4 MB/s 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.750 23:39:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.750 23:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.750 23:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.750 23:39:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.009 /dev/nbd1 00:06:02.009 23:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.009 23:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.009 1+0 records in 00:06:02.009 1+0 records out 00:06:02.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188375 s, 21.7 MB/s 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.009 23:39:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.009 23:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.009 23:39:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.009 23:39:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.009 23:39:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
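Round 0 above creates two Malloc bdevs (bdev_malloc_create 64 4096) over the test's RPC socket, exports them through the kernel nbd driver, and sanity-checks each node the way waitfornbd does: confirm the device shows up in /proc/partitions, then read one 4 KiB block with O_DIRECT and make sure the copy is non-empty. A condensed sketch of that setup, with an arbitrary scratch path:

  # Sketch of the per-round setup traced above (same RPC socket as the log).
  SOCK=/var/tmp/spdk-nbd.sock
  RPC="./scripts/rpc.py -s $SOCK"
  $RPC bdev_malloc_create 64 4096          # -> Malloc0
  $RPC bdev_malloc_create 64 4096          # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  # waitfornbd-style check on one device
  grep -q -w nbd0 /proc/partitions
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  test "$(stat -c %s /tmp/nbdtest)" -ne 0 && rm -f /tmp/nbdtest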
00:06:02.009 23:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.267 23:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.267 { 00:06:02.267 "nbd_device": "/dev/nbd0", 00:06:02.267 "bdev_name": "Malloc0" 00:06:02.267 }, 00:06:02.267 { 00:06:02.267 "nbd_device": "/dev/nbd1", 00:06:02.267 "bdev_name": "Malloc1" 00:06:02.267 } 00:06:02.267 ]' 00:06:02.267 23:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.267 { 00:06:02.267 "nbd_device": "/dev/nbd0", 00:06:02.267 "bdev_name": "Malloc0" 00:06:02.268 }, 00:06:02.268 { 00:06:02.268 "nbd_device": "/dev/nbd1", 00:06:02.268 "bdev_name": "Malloc1" 00:06:02.268 } 00:06:02.268 ]' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.268 /dev/nbd1' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.268 /dev/nbd1' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.268 256+0 records in 00:06:02.268 256+0 records out 00:06:02.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599605 s, 175 MB/s 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.268 256+0 records in 00:06:02.268 256+0 records out 00:06:02.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201445 s, 52.1 MB/s 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.268 256+0 records in 00:06:02.268 256+0 records out 00:06:02.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156886 s, 66.8 MB/s 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.268 23:39:50 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.268 23:39:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.527 23:39:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.784 23:39:50 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.784 23:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.042 23:39:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.042 23:39:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.300 23:39:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.300 [2024-11-26 23:39:51.312059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.300 [2024-11-26 23:39:51.333720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.300 [2024-11-26 23:39:51.333728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.300 [2024-11-26 23:39:51.376050] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.300 [2024-11-26 23:39:51.376109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.583 23:39:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.583 spdk_app_start Round 1 00:06:06.583 23:39:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.583 23:39:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70322 /var/tmp/spdk-nbd.sock 00:06:06.583 23:39:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70322 ']' 00:06:06.584 23:39:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.584 23:39:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.584 23:39:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
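The data check in each round (nbd_dd_data_verify in the trace above) is plain dd plus cmp: fill a 1 MiB scratch file from /dev/urandom, write it through both nbd nodes with O_DIRECT, then compare the device contents back against the file. A stand-alone sketch, using an arbitrary scratch path:

  # Sketch of the write/verify pass traced in Round 0 (bs*count = 1 MiB, as in the log).
  TMP=/tmp/nbdrandtest
  dd if=/dev/urandom of="$TMP" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$TMP" "$dev"       # a mismatch makes cmp exit non-zero and fails the test
  done
  rm "$TMP"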
00:06:06.584 23:39:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.584 23:39:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.584 23:39:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.584 23:39:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.584 23:39:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.584 Malloc0 00:06:06.584 23:39:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.842 Malloc1 00:06:06.842 23:39:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.842 23:39:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.843 23:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.843 23:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.843 23:39:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.101 /dev/nbd0 00:06:07.101 23:39:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.101 23:39:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.101 1+0 records in 00:06:07.101 1+0 records out 
00:06:07.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506067 s, 8.1 MB/s 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.101 23:39:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.101 23:39:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.101 23:39:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.101 23:39:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.359 /dev/nbd1 00:06:07.359 23:39:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.359 23:39:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.359 1+0 records in 00:06:07.359 1+0 records out 00:06:07.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243348 s, 16.8 MB/s 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.359 23:39:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.359 23:39:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.359 23:39:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.359 23:39:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.359 23:39:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.359 23:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.618 { 00:06:07.618 "nbd_device": "/dev/nbd0", 00:06:07.618 "bdev_name": "Malloc0" 00:06:07.618 }, 00:06:07.618 { 00:06:07.618 "nbd_device": "/dev/nbd1", 00:06:07.618 "bdev_name": "Malloc1" 00:06:07.618 } 
00:06:07.618 ]' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.618 { 00:06:07.618 "nbd_device": "/dev/nbd0", 00:06:07.618 "bdev_name": "Malloc0" 00:06:07.618 }, 00:06:07.618 { 00:06:07.618 "nbd_device": "/dev/nbd1", 00:06:07.618 "bdev_name": "Malloc1" 00:06:07.618 } 00:06:07.618 ]' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.618 /dev/nbd1' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.618 /dev/nbd1' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.618 256+0 records in 00:06:07.618 256+0 records out 00:06:07.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00794825 s, 132 MB/s 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.618 256+0 records in 00:06:07.618 256+0 records out 00:06:07.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170442 s, 61.5 MB/s 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.618 256+0 records in 00:06:07.618 256+0 records out 00:06:07.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161616 s, 64.9 MB/s 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.618 23:39:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.618 23:39:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.888 23:39:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.179 23:39:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.179 23:39:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.180 23:39:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.180 23:39:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.443 23:39:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.705 [2024-11-26 23:39:56.604478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.705 [2024-11-26 23:39:56.625018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.705 [2024-11-26 23:39:56.625093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.705 [2024-11-26 23:39:56.665540] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.705 [2024-11-26 23:39:56.665597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.988 spdk_app_start Round 2 00:06:11.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.988 23:39:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.988 23:39:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.988 23:39:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70322 /var/tmp/spdk-nbd.sock 00:06:11.988 23:39:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70322 ']' 00:06:11.988 23:39:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.988 23:39:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.988 23:39:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
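Between setup and teardown the harness counts the active nbd exports by listing them over RPC and counting device nodes in the returned JSON (nbd_get_count in the traces above); after nbd_stop_disk has detached both devices the same query returns an empty list. A minimal sketch of that check:

  # Sketch: expect 2 exports while a round is running, 0 after teardown.
  SOCK=/var/tmp/spdk-nbd.sock
  disks_json=$(./scripts/rpc.py -s "$SOCK" nbd_get_disks)
  count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  echo "active nbd exports: $count"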
00:06:11.988 23:39:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.988 23:39:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.988 23:39:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.988 23:39:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.988 23:39:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.988 Malloc0 00:06:11.988 23:39:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.988 Malloc1 00:06:12.247 23:40:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.247 /dev/nbd0 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.247 1+0 records in 00:06:12.247 1+0 records out 
00:06:12.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291211 s, 14.1 MB/s 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.247 23:40:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.247 23:40:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.506 /dev/nbd1 00:06:12.506 23:40:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.506 23:40:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.506 1+0 records in 00:06:12.506 1+0 records out 00:06:12.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021352 s, 19.2 MB/s 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.506 23:40:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.506 23:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.506 23:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.506 23:40:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.506 23:40:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.506 23:40:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.763 { 00:06:12.763 "nbd_device": "/dev/nbd0", 00:06:12.763 "bdev_name": "Malloc0" 00:06:12.763 }, 00:06:12.763 { 00:06:12.763 "nbd_device": "/dev/nbd1", 00:06:12.763 "bdev_name": "Malloc1" 00:06:12.763 } 
00:06:12.763 ]' 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.763 { 00:06:12.763 "nbd_device": "/dev/nbd0", 00:06:12.763 "bdev_name": "Malloc0" 00:06:12.763 }, 00:06:12.763 { 00:06:12.763 "nbd_device": "/dev/nbd1", 00:06:12.763 "bdev_name": "Malloc1" 00:06:12.763 } 00:06:12.763 ]' 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.763 /dev/nbd1' 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.763 /dev/nbd1' 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.763 23:40:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.764 256+0 records in 00:06:12.764 256+0 records out 00:06:12.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00873017 s, 120 MB/s 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.764 256+0 records in 00:06:12.764 256+0 records out 00:06:12.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147873 s, 70.9 MB/s 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.764 256+0 records in 00:06:12.764 256+0 records out 00:06:12.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189137 s, 55.4 MB/s 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.764 23:40:00 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.764 23:40:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.022 23:40:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.022 23:40:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.022 23:40:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.022 23:40:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.022 23:40:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.022 23:40:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.022 23:40:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.022 23:40:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.022 23:40:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.279 23:40:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.537 23:40:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.538 23:40:01 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.538 23:40:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.538 23:40:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.805 23:40:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.805 [2024-11-26 23:40:01.880443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.805 [2024-11-26 23:40:01.900831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.805 [2024-11-26 23:40:01.900848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.067 [2024-11-26 23:40:01.941764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.067 [2024-11-26 23:40:01.941826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.352 23:40:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70322 /var/tmp/spdk-nbd.sock 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70322 ']' 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
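Each round's teardown above is the mirror image of the setup: detach both nbd exports, ask the app to exit with spdk_kill_instance SIGTERM, then wait for the process to actually disappear (killprocess later in the trace). A condensed sketch, reusing $repeat_pid from the startup sketch earlier:

  # Sketch of the shutdown sequence (RPC socket as above).
  SOCK=/var/tmp/spdk-nbd.sock
  RPC="./scripts/rpc.py -s $SOCK"
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM
  while kill -0 "$repeat_pid" 2>/dev/null; do sleep 0.5; done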
00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.352 23:40:04 event.app_repeat -- event/event.sh@39 -- # killprocess 70322 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70322 ']' 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70322 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.352 23:40:04 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70322 00:06:17.352 killing process with pid 70322 00:06:17.352 23:40:05 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.352 23:40:05 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.352 23:40:05 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70322' 00:06:17.352 23:40:05 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70322 00:06:17.352 23:40:05 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70322 00:06:17.352 spdk_app_start is called in Round 0. 00:06:17.352 Shutdown signal received, stop current app iteration 00:06:17.352 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 reinitialization... 00:06:17.352 spdk_app_start is called in Round 1. 00:06:17.352 Shutdown signal received, stop current app iteration 00:06:17.352 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 reinitialization... 00:06:17.352 spdk_app_start is called in Round 2. 00:06:17.352 Shutdown signal received, stop current app iteration 00:06:17.352 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 reinitialization... 00:06:17.352 spdk_app_start is called in Round 3. 00:06:17.352 Shutdown signal received, stop current app iteration 00:06:17.352 23:40:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.352 23:40:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.352 00:06:17.352 real 0m16.943s 00:06:17.352 user 0m37.814s 00:06:17.352 sys 0m2.176s 00:06:17.352 23:40:05 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.352 23:40:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.352 ************************************ 00:06:17.352 END TEST app_repeat 00:06:17.352 ************************************ 00:06:17.352 23:40:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.352 23:40:05 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:17.352 23:40:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.352 23:40:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.352 23:40:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.352 ************************************ 00:06:17.352 START TEST cpu_locks 00:06:17.352 ************************************ 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:17.352 * Looking for test storage... 
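The app_repeat rounds above are driven from test/event/event.sh: after each round the shell asks the running app to shut down via spdk_kill_instance and waits for the next spdk_app_start round to come back up on the same socket. A rough sketch of one iteration, assuming the waitforlisten helper from autotest_common.sh ($app_pid is illustrative):

rpc_sock=/var/tmp/spdk-nbd.sock
scripts/rpc.py -s "$rpc_sock" spdk_kill_instance SIGTERM   # ends the current round
sleep 3                                                    # give the app time to restart spdk_app_start
waitforlisten "$app_pid" "$rpc_sock"                       # the next round is listening again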
00:06:17.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.352 23:40:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.352 --rc genhtml_branch_coverage=1 00:06:17.352 --rc genhtml_function_coverage=1 00:06:17.352 --rc genhtml_legend=1 00:06:17.352 --rc geninfo_all_blocks=1 00:06:17.352 --rc geninfo_unexecuted_blocks=1 00:06:17.352 00:06:17.352 ' 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.352 --rc genhtml_branch_coverage=1 00:06:17.352 --rc genhtml_function_coverage=1 
00:06:17.352 --rc genhtml_legend=1 00:06:17.352 --rc geninfo_all_blocks=1 00:06:17.352 --rc geninfo_unexecuted_blocks=1 00:06:17.352 00:06:17.352 ' 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.352 --rc genhtml_branch_coverage=1 00:06:17.352 --rc genhtml_function_coverage=1 00:06:17.352 --rc genhtml_legend=1 00:06:17.352 --rc geninfo_all_blocks=1 00:06:17.352 --rc geninfo_unexecuted_blocks=1 00:06:17.352 00:06:17.352 ' 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.352 --rc genhtml_branch_coverage=1 00:06:17.352 --rc genhtml_function_coverage=1 00:06:17.352 --rc genhtml_legend=1 00:06:17.352 --rc geninfo_all_blocks=1 00:06:17.352 --rc geninfo_unexecuted_blocks=1 00:06:17.352 00:06:17.352 ' 00:06:17.352 23:40:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.352 23:40:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.352 23:40:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.352 23:40:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.352 23:40:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.352 ************************************ 00:06:17.352 START TEST default_locks 00:06:17.352 ************************************ 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70742 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70742 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70742 ']' 00:06:17.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.353 23:40:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.353 [2024-11-26 23:40:05.449324] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:17.353 [2024-11-26 23:40:05.449500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70742 ] 00:06:17.612 [2024-11-26 23:40:05.605302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.612 [2024-11-26 23:40:05.628523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.176 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.176 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:18.177 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70742 00:06:18.177 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70742 00:06:18.177 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70742 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70742 ']' 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70742 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70742 00:06:18.434 killing process with pid 70742 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70742' 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70742 00:06:18.434 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70742 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70742 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70742 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:18.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70742 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70742 ']' 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.692 ERROR: process (pid: 70742) is no longer running 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.692 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70742) - No such process 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.692 ************************************ 00:06:18.692 END TEST default_locks 00:06:18.692 ************************************ 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.692 00:06:18.692 real 0m1.418s 00:06:18.692 user 0m1.402s 00:06:18.692 sys 0m0.453s 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.692 23:40:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.692 23:40:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:18.692 23:40:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.692 23:40:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.692 23:40:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.692 ************************************ 00:06:18.692 START TEST default_locks_via_rpc 00:06:18.692 ************************************ 00:06:18.692 23:40:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:18.692 23:40:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70789 00:06:18.692 23:40:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70789 00:06:18.692 23:40:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70789 ']' 
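The default_locks run above boils down to the locks_exist helper from test/event/cpu_locks.sh: a target started with -m 0x1 must hold a POSIX lock on a /var/tmp/spdk_cpu_lock_* file, which lslocks can see on the target's pid. A sketch of that check, with only the lslocks | grep pipeline taken verbatim from the trace (the wrapper function is illustrative):

locks_exist() {
    local pid=$1
    # the SPDK app takes file locks named spdk_cpu_lock_<core> for each claimed core
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist "$spdk_tgt_pid"    # passes while the target holds its core lock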
00:06:18.692 23:40:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.692 23:40:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.693 23:40:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.693 23:40:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.693 23:40:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.693 23:40:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.952 [2024-11-26 23:40:06.883811] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:18.952 [2024-11-26 23:40:06.883958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70789 ] 00:06:18.952 [2024-11-26 23:40:07.031155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.952 [2024-11-26 23:40:07.060239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.900 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.900 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70789 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70789 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70789 00:06:19.901 23:40:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70789 ']' 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70789 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70789 00:06:19.901 killing process with pid 70789 00:06:19.901 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.902 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.902 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70789' 00:06:19.902 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70789 00:06:19.902 23:40:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70789 00:06:20.473 ************************************ 00:06:20.473 END TEST default_locks_via_rpc 00:06:20.473 ************************************ 00:06:20.473 00:06:20.473 real 0m1.495s 00:06:20.473 user 0m1.370s 00:06:20.473 sys 0m0.596s 00:06:20.473 23:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.473 23:40:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.473 23:40:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.473 23:40:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.473 23:40:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.473 23:40:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.473 ************************************ 00:06:20.473 START TEST non_locking_app_on_locked_coremask 00:06:20.473 ************************************ 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70836 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70836 /var/tmp/spdk.sock 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70836 ']' 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
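The default_locks_via_rpc run above exercises the same lock file, but toggles it at runtime: framework_disable_cpumask_locks releases the core lock on a live target and framework_enable_cpumask_locks takes it back, with the lock-file glob checked in between. A minimal sketch of that sequence (rpc_cmd is the autotest wrapper around scripts/rpc.py; the empty-glob assertion is illustrative):

rpc_cmd framework_disable_cpumask_locks            # live target releases its core lock
files=(/var/tmp/spdk_cpu_lock_*)
[[ ! -e ${files[0]} ]]                             # expect: no lock files remain
rpc_cmd framework_enable_cpumask_locks             # lock is re-acquired
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock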
00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.473 23:40:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.473 [2024-11-26 23:40:08.425122] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:20.473 [2024-11-26 23:40:08.425263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70836 ] 00:06:20.473 [2024-11-26 23:40:08.572952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.473 [2024-11-26 23:40:08.597661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70852 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70852 /var/tmp/spdk2.sock 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70852 ']' 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.415 23:40:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.415 [2024-11-26 23:40:09.328186] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:21.415 [2024-11-26 23:40:09.328481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70852 ] 00:06:21.415 [2024-11-26 23:40:09.485899] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
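The non_locking_app_on_locked_coremask case above shows that a second target can start on an already-locked core mask as long as it opts out of the locks: the first instance on -m 0x1 holds the core 0 lock, and the second is launched with --disable-cpumask-locks on its own RPC socket and reports "CPU core locks deactivated." A condensed sketch of that setup (binary and helper paths shortened, pid handling illustrative):

build/bin/spdk_tgt -m 0x1 &                                     # takes the core 0 lock
first=$!
waitforlisten "$first" /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
second=$!
waitforlisten "$second" /var/tmp/spdk2.sock                     # starts fine: locks deactivated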
00:06:21.415 [2024-11-26 23:40:09.485953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.415 [2024-11-26 23:40:09.534170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.352 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.352 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.352 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70836 00:06:22.353 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70836 00:06:22.353 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70836 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70836 ']' 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70836 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70836 00:06:22.611 killing process with pid 70836 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70836' 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70836 00:06:22.611 23:40:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70836 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70852 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70852 ']' 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70852 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70852 00:06:23.176 killing process with pid 70852 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70852' 00:06:23.176 23:40:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70852 00:06:23.176 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70852 00:06:23.433 ************************************ 00:06:23.433 END TEST non_locking_app_on_locked_coremask 00:06:23.433 ************************************ 00:06:23.433 00:06:23.433 real 0m3.087s 00:06:23.433 user 0m3.312s 00:06:23.433 sys 0m0.865s 00:06:23.433 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.433 23:40:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.433 23:40:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:23.433 23:40:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.433 23:40:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.433 23:40:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.433 ************************************ 00:06:23.433 START TEST locking_app_on_unlocked_coremask 00:06:23.433 ************************************ 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70910 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70910 /var/tmp/spdk.sock 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70910 ']' 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.433 23:40:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.433 [2024-11-26 23:40:11.555557] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:23.433 [2024-11-26 23:40:11.555678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70910 ] 00:06:23.691 [2024-11-26 23:40:11.685299] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
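The locking_app_on_unlocked_coremask run that starts above inverts the previous case: the first target gives up the core lock with --disable-cpumask-locks, so the second target on the same mask is the one that ends up holding it. A condensed sketch of the expected layout (paths shortened, pid handling illustrative):

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &             # no lock taken
unlocked=$!
waitforlisten "$unlocked" /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &              # this instance locks core 0
locked=$!
waitforlisten "$locked" /var/tmp/spdk2.sock
lslocks -p "$locked" | grep -q spdk_cpu_lock                    # lock is held by the second pid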
00:06:23.691 [2024-11-26 23:40:11.685345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.691 [2024-11-26 23:40:11.708764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70926 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70926 /var/tmp/spdk2.sock 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70926 ']' 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.256 23:40:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.513 [2024-11-26 23:40:12.439544] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:24.513 [2024-11-26 23:40:12.439875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70926 ] 00:06:24.513 [2024-11-26 23:40:12.589922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.513 [2024-11-26 23:40:12.637286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70926 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70926 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70910 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70910 ']' 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70910 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70910 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70910' 00:06:25.501 killing process with pid 70910 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70910 00:06:25.501 23:40:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70910 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70926 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70926 ']' 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70926 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70926 00:06:26.104 killing process with pid 70926 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.104 23:40:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70926' 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70926 00:06:26.104 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70926 00:06:26.361 00:06:26.361 real 0m2.983s 00:06:26.361 user 0m3.183s 00:06:26.361 sys 0m0.820s 00:06:26.361 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.361 ************************************ 00:06:26.361 END TEST locking_app_on_unlocked_coremask 00:06:26.361 ************************************ 00:06:26.361 23:40:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.620 23:40:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.620 23:40:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.620 23:40:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.620 23:40:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.620 ************************************ 00:06:26.620 START TEST locking_app_on_locked_coremask 00:06:26.620 ************************************ 00:06:26.620 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:26.620 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70984 00:06:26.620 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70984 /var/tmp/spdk.sock 00:06:26.620 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70984 ']' 00:06:26.620 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.620 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.620 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.621 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.621 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.621 23:40:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.621 [2024-11-26 23:40:14.587866] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:26.621 [2024-11-26 23:40:14.588003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70984 ] 00:06:26.621 [2024-11-26 23:40:14.725050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.621 [2024-11-26 23:40:14.748123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71000 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71000 /var/tmp/spdk2.sock 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71000 /var/tmp/spdk2.sock 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71000 /var/tmp/spdk2.sock 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71000 ']' 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.554 23:40:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.554 [2024-11-26 23:40:15.455278] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:27.554 [2024-11-26 23:40:15.455613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71000 ] 00:06:27.554 [2024-11-26 23:40:15.612779] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70984 has claimed it. 00:06:27.554 [2024-11-26 23:40:15.612835] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.121 ERROR: process (pid: 71000) is no longer running 00:06:28.121 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71000) - No such process 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70984 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70984 00:06:28.121 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70984 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70984 ']' 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70984 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70984 00:06:28.379 killing process with pid 70984 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70984' 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70984 00:06:28.379 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70984 00:06:28.639 00:06:28.639 real 0m2.139s 00:06:28.639 user 0m2.335s 00:06:28.639 sys 0m0.536s 00:06:28.639 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.639 ************************************ 00:06:28.639 END 
TEST locking_app_on_locked_coremask 00:06:28.639 ************************************ 00:06:28.639 23:40:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.639 23:40:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:28.639 23:40:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.639 23:40:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.639 23:40:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.639 ************************************ 00:06:28.639 START TEST locking_overlapped_coremask 00:06:28.639 ************************************ 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71042 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71042 /var/tmp/spdk.sock 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71042 ']' 00:06:28.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.639 23:40:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.639 [2024-11-26 23:40:16.761955] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:28.639 [2024-11-26 23:40:16.762071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71042 ] 00:06:28.898 [2024-11-26 23:40:16.899374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.898 [2024-11-26 23:40:16.924711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.898 [2024-11-26 23:40:16.924960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.898 [2024-11-26 23:40:16.924975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71060 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71060 /var/tmp/spdk2.sock 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71060 /var/tmp/spdk2.sock 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71060 /var/tmp/spdk2.sock 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71060 ']' 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.834 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.835 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.835 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.835 23:40:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.835 [2024-11-26 23:40:17.670829] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:29.835 [2024-11-26 23:40:17.671166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71060 ] 00:06:29.835 [2024-11-26 23:40:17.830776] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71042 has claimed it. 00:06:29.835 [2024-11-26 23:40:17.834856] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.402 ERROR: process (pid: 71060) is no longer running 00:06:30.402 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71060) - No such process 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71042 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71042 ']' 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71042 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71042 00:06:30.402 killing process with pid 71042 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71042' 00:06:30.402 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71042 00:06:30.402 23:40:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71042 00:06:30.661 ************************************ 00:06:30.661 END TEST locking_overlapped_coremask 00:06:30.661 ************************************ 00:06:30.661 00:06:30.661 real 0m1.939s 00:06:30.661 user 0m5.405s 00:06:30.661 sys 0m0.398s 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 23:40:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.661 23:40:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.661 23:40:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.661 23:40:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 ************************************ 00:06:30.661 START TEST locking_overlapped_coremask_via_rpc 00:06:30.661 ************************************ 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71102 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71102 /var/tmp/spdk.sock 00:06:30.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71102 ']' 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.661 23:40:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 [2024-11-26 23:40:18.756407] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:30.661 [2024-11-26 23:40:18.756547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71102 ] 00:06:30.920 [2024-11-26 23:40:18.901353] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
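For reference, the check_remaining_locks helper exercised in the trace above (event/cpu_locks.sh@36-38) reduces to the sketch below; the lock paths and the 0x7 core mask are taken from the log, the rest is illustrative:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files currently present under /var/tmp
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one lock per core in mask 0x7 (cores 0-2)
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # the target that won the race (pid 71042) must still hold exactly these three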
00:06:30.920 [2024-11-26 23:40:18.901395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.920 [2024-11-26 23:40:18.926573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.920 [2024-11-26 23:40:18.926600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.920 [2024-11-26 23:40:18.926671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71120 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71120 /var/tmp/spdk2.sock 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71120 ']' 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.489 23:40:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.747 [2024-11-26 23:40:19.663275] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:31.747 [2024-11-26 23:40:19.663602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71120 ] 00:06:31.747 [2024-11-26 23:40:19.823924] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.747 [2024-11-26 23:40:19.823978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.022 [2024-11-26 23:40:19.879531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.023 [2024-11-26 23:40:19.882881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.023 [2024-11-26 23:40:19.882947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.588 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.589 [2024-11-26 23:40:20.528931] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71102 has claimed it. 00:06:32.589 request: 00:06:32.589 { 00:06:32.589 "method": "framework_enable_cpumask_locks", 00:06:32.589 "req_id": 1 00:06:32.589 } 00:06:32.589 Got JSON-RPC error response 00:06:32.589 response: 00:06:32.589 { 00:06:32.589 "code": -32603, 00:06:32.589 "message": "Failed to claim CPU core: 2" 00:06:32.589 } 00:06:32.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
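The -32603 "Failed to claim CPU core: 2" response above is the expected outcome: the two targets in this test are started with overlapping reactor masks, so once the first one claims its cores the second cannot. The overlap is easy to verify from the command lines printed in the trace:

    0x07 = 0b00111  -> cores 0,1,2   (first target, pid 71102)
    0x1c = 0b11100  -> cores 2,3,4   (second target, pid 71120)
    0x07 & 0x1c = 0x04 -> core 2, the contested core

An illustrative replay of the same sequence by hand (binaries, sockets and RPC names as they appear in the log; paths shortened relative to the spdk repo):

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # first target, lock claiming deferred
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # second target, overlaps on core 2
    scripts/rpc.py framework_enable_cpumask_locks                                # first target claims spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails with -32603, core 2 already claimed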
00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71102 /var/tmp/spdk.sock 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71102 ']' 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.589 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71120 /var/tmp/spdk2.sock 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71120 ']' 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.846 ************************************ 00:06:32.846 END TEST locking_overlapped_coremask_via_rpc 00:06:32.846 ************************************ 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.846 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.846 00:06:32.846 real 0m2.284s 00:06:32.846 user 0m1.056s 00:06:32.846 sys 0m0.151s 00:06:32.847 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.847 23:40:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.104 23:40:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:33.104 23:40:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71102 ]] 00:06:33.104 23:40:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71102 00:06:33.104 23:40:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71102 ']' 00:06:33.104 23:40:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71102 00:06:33.104 23:40:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:33.104 23:40:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.104 23:40:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71102 00:06:33.104 killing process with pid 71102 00:06:33.104 23:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.104 23:40:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.104 23:40:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71102' 00:06:33.104 23:40:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71102 00:06:33.104 23:40:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71102 00:06:33.362 23:40:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71120 ]] 00:06:33.362 23:40:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71120 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71120 ']' 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71120 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.362 
23:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71120 00:06:33.362 killing process with pid 71120 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71120' 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71120 00:06:33.362 23:40:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71120 00:06:33.621 23:40:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.621 Process with pid 71102 is not found 00:06:33.621 23:40:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.621 23:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71102 ]] 00:06:33.621 23:40:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71102 00:06:33.621 23:40:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71102 ']' 00:06:33.621 23:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71102 00:06:33.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71102) - No such process 00:06:33.621 23:40:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71102 is not found' 00:06:33.621 23:40:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71120 ]] 00:06:33.621 23:40:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71120 00:06:33.621 Process with pid 71120 is not found 00:06:33.621 23:40:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71120 ']' 00:06:33.621 23:40:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71120 00:06:33.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71120) - No such process 00:06:33.621 23:40:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71120 is not found' 00:06:33.621 23:40:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.621 ************************************ 00:06:33.621 END TEST cpu_locks 00:06:33.621 ************************************ 00:06:33.621 00:06:33.621 real 0m16.472s 00:06:33.621 user 0m28.743s 00:06:33.621 sys 0m4.636s 00:06:33.621 23:40:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.621 23:40:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.621 ************************************ 00:06:33.621 END TEST event 00:06:33.621 ************************************ 00:06:33.621 00:06:33.621 real 0m40.556s 00:06:33.621 user 1m18.050s 00:06:33.621 sys 0m7.556s 00:06:33.621 23:40:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.621 23:40:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.621 23:40:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:33.621 23:40:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.621 23:40:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.621 23:40:21 -- common/autotest_common.sh@10 -- # set +x 00:06:33.621 ************************************ 00:06:33.621 START TEST thread 00:06:33.621 ************************************ 00:06:33.621 23:40:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:33.880 * Looking for test storage... 
00:06:33.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.880 23:40:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.880 23:40:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.880 23:40:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.880 23:40:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.880 23:40:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.880 23:40:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.880 23:40:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.880 23:40:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.880 23:40:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.880 23:40:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.880 23:40:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.880 23:40:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:33.880 23:40:21 thread -- scripts/common.sh@345 -- # : 1 00:06:33.880 23:40:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.880 23:40:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.880 23:40:21 thread -- scripts/common.sh@365 -- # decimal 1 00:06:33.880 23:40:21 thread -- scripts/common.sh@353 -- # local d=1 00:06:33.880 23:40:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.880 23:40:21 thread -- scripts/common.sh@355 -- # echo 1 00:06:33.880 23:40:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.880 23:40:21 thread -- scripts/common.sh@366 -- # decimal 2 00:06:33.880 23:40:21 thread -- scripts/common.sh@353 -- # local d=2 00:06:33.880 23:40:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.880 23:40:21 thread -- scripts/common.sh@355 -- # echo 2 00:06:33.880 23:40:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.880 23:40:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.880 23:40:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.880 23:40:21 thread -- scripts/common.sh@368 -- # return 0 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.880 --rc genhtml_branch_coverage=1 00:06:33.880 --rc genhtml_function_coverage=1 00:06:33.880 --rc genhtml_legend=1 00:06:33.880 --rc geninfo_all_blocks=1 00:06:33.880 --rc geninfo_unexecuted_blocks=1 00:06:33.880 00:06:33.880 ' 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.880 --rc genhtml_branch_coverage=1 00:06:33.880 --rc genhtml_function_coverage=1 00:06:33.880 --rc genhtml_legend=1 00:06:33.880 --rc geninfo_all_blocks=1 00:06:33.880 --rc geninfo_unexecuted_blocks=1 00:06:33.880 00:06:33.880 ' 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:33.880 --rc genhtml_branch_coverage=1 00:06:33.880 --rc genhtml_function_coverage=1 00:06:33.880 --rc genhtml_legend=1 00:06:33.880 --rc geninfo_all_blocks=1 00:06:33.880 --rc geninfo_unexecuted_blocks=1 00:06:33.880 00:06:33.880 ' 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.880 --rc genhtml_branch_coverage=1 00:06:33.880 --rc genhtml_function_coverage=1 00:06:33.880 --rc genhtml_legend=1 00:06:33.880 --rc geninfo_all_blocks=1 00:06:33.880 --rc geninfo_unexecuted_blocks=1 00:06:33.880 00:06:33.880 ' 00:06:33.880 23:40:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.880 23:40:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.880 ************************************ 00:06:33.880 START TEST thread_poller_perf 00:06:33.880 ************************************ 00:06:33.880 23:40:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.880 [2024-11-26 23:40:21.913641] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:33.880 [2024-11-26 23:40:21.913872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71247 ] 00:06:34.138 [2024-11-26 23:40:22.054483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.138 [2024-11-26 23:40:22.077088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.138 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:35.070 [2024-11-26T23:40:23.201Z] ====================================== 00:06:35.070 [2024-11-26T23:40:23.201Z] busy:2610635650 (cyc) 00:06:35.070 [2024-11-26T23:40:23.201Z] total_run_count: 411000 00:06:35.070 [2024-11-26T23:40:23.201Z] tsc_hz: 2600000000 (cyc) 00:06:35.070 [2024-11-26T23:40:23.201Z] ====================================== 00:06:35.070 [2024-11-26T23:40:23.201Z] poller_cost: 6351 (cyc), 2442 (nsec) 00:06:35.070 00:06:35.070 real 0m1.240s 00:06:35.070 user 0m1.084s 00:06:35.070 sys 0m0.051s 00:06:35.070 23:40:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.070 23:40:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.070 ************************************ 00:06:35.070 END TEST thread_poller_perf 00:06:35.070 ************************************ 00:06:35.070 23:40:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.070 23:40:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:35.070 23:40:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.070 23:40:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.070 ************************************ 00:06:35.070 START TEST thread_poller_perf 00:06:35.070 ************************************ 00:06:35.070 23:40:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.328 [2024-11-26 23:40:23.201001] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:35.328 [2024-11-26 23:40:23.201236] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71278 ] 00:06:35.328 [2024-11-26 23:40:23.341879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.328 Running 1000 pollers for 1 seconds with 0 microseconds period. 
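The poller_cost figures in the summary above follow directly from the printed counters; a quick sanity check for the 1 µs-period run (the 0 µs run reported next works the same way):

    poller_cost = busy cycles / total_run_count = 2610635650 / 411000 ≈ 6351 cyc
    6351 cyc / (tsc_hz 2600000000 cyc/s)        ≈ 2442 nsec per poller invocation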
00:06:35.328 [2024-11-26 23:40:23.364747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.714 [2024-11-26T23:40:24.845Z] ====================================== 00:06:36.714 [2024-11-26T23:40:24.845Z] busy:2602932556 (cyc) 00:06:36.714 [2024-11-26T23:40:24.845Z] total_run_count: 5216000 00:06:36.714 [2024-11-26T23:40:24.845Z] tsc_hz: 2600000000 (cyc) 00:06:36.714 [2024-11-26T23:40:24.845Z] ====================================== 00:06:36.714 [2024-11-26T23:40:24.845Z] poller_cost: 499 (cyc), 191 (nsec) 00:06:36.714 00:06:36.714 real 0m1.234s 00:06:36.714 user 0m1.071s 00:06:36.714 sys 0m0.058s 00:06:36.714 ************************************ 00:06:36.714 END TEST thread_poller_perf 00:06:36.714 ************************************ 00:06:36.714 23:40:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.714 23:40:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.714 23:40:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.714 00:06:36.714 real 0m2.708s 00:06:36.714 user 0m2.270s 00:06:36.714 sys 0m0.232s 00:06:36.714 ************************************ 00:06:36.714 END TEST thread 00:06:36.714 ************************************ 00:06:36.714 23:40:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.714 23:40:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.714 23:40:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:36.714 23:40:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.714 23:40:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.714 23:40:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.714 23:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:36.714 ************************************ 00:06:36.714 START TEST app_cmdline 00:06:36.714 ************************************ 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.714 * Looking for test storage... 
00:06:36.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.714 23:40:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.714 --rc genhtml_branch_coverage=1 00:06:36.714 --rc genhtml_function_coverage=1 00:06:36.714 --rc genhtml_legend=1 00:06:36.714 --rc geninfo_all_blocks=1 00:06:36.714 --rc geninfo_unexecuted_blocks=1 00:06:36.714 00:06:36.714 ' 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.714 --rc genhtml_branch_coverage=1 00:06:36.714 --rc genhtml_function_coverage=1 00:06:36.714 --rc genhtml_legend=1 00:06:36.714 --rc geninfo_all_blocks=1 00:06:36.714 --rc geninfo_unexecuted_blocks=1 00:06:36.714 
00:06:36.714 ' 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.714 --rc genhtml_branch_coverage=1 00:06:36.714 --rc genhtml_function_coverage=1 00:06:36.714 --rc genhtml_legend=1 00:06:36.714 --rc geninfo_all_blocks=1 00:06:36.714 --rc geninfo_unexecuted_blocks=1 00:06:36.714 00:06:36.714 ' 00:06:36.714 23:40:24 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.714 --rc genhtml_branch_coverage=1 00:06:36.715 --rc genhtml_function_coverage=1 00:06:36.715 --rc genhtml_legend=1 00:06:36.715 --rc geninfo_all_blocks=1 00:06:36.715 --rc geninfo_unexecuted_blocks=1 00:06:36.715 00:06:36.715 ' 00:06:36.715 23:40:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:36.715 23:40:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71367 00:06:36.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.715 23:40:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71367 00:06:36.715 23:40:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71367 ']' 00:06:36.715 23:40:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.715 23:40:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.715 23:40:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.715 23:40:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:36.715 23:40:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.715 23:40:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.715 [2024-11-26 23:40:24.677882] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:36.715 [2024-11-26 23:40:24.678001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71367 ] 00:06:36.715 [2024-11-26 23:40:24.816231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.715 [2024-11-26 23:40:24.839133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:37.647 { 00:06:37.647 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:06:37.647 "fields": { 00:06:37.647 "major": 25, 00:06:37.647 "minor": 1, 00:06:37.647 "patch": 0, 00:06:37.647 "suffix": "-pre", 00:06:37.647 "commit": "2f2acf4eb" 00:06:37.647 } 00:06:37.647 } 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.647 23:40:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:37.647 23:40:25 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.904 request: 00:06:37.904 { 00:06:37.904 "method": "env_dpdk_get_mem_stats", 00:06:37.904 "req_id": 1 00:06:37.904 } 00:06:37.904 Got JSON-RPC error response 00:06:37.904 response: 00:06:37.904 { 00:06:37.904 "code": -32601, 00:06:37.904 "message": "Method not found" 00:06:37.904 } 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.904 23:40:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71367 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71367 ']' 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71367 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71367 00:06:37.904 killing process with pid 71367 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71367' 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@973 -- # kill 71367 00:06:37.904 23:40:25 app_cmdline -- common/autotest_common.sh@978 -- # wait 71367 00:06:38.162 ************************************ 00:06:38.162 END TEST app_cmdline 00:06:38.162 ************************************ 00:06:38.162 00:06:38.162 real 0m1.754s 00:06:38.162 user 0m2.054s 00:06:38.162 sys 0m0.403s 00:06:38.162 23:40:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.162 23:40:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.162 23:40:26 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:38.162 23:40:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.162 23:40:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.162 23:40:26 -- common/autotest_common.sh@10 -- # set +x 00:06:38.162 ************************************ 00:06:38.162 START TEST version 00:06:38.162 ************************************ 00:06:38.162 23:40:26 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:38.427 * Looking for test storage... 
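The -32601 "Method not found" response above is the point of the app_cmdline run: that target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served. A compact recap of the behaviour exercised in the trace (paths relative to the spdk repo, as printed above):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown earlier
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # rejected with -32601 "Method not found"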
00:06:38.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:38.427 23:40:26 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.427 23:40:26 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.427 23:40:26 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.427 23:40:26 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.427 23:40:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.427 23:40:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.427 23:40:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.427 23:40:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.427 23:40:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.427 23:40:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.427 23:40:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.427 23:40:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.427 23:40:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.427 23:40:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.427 23:40:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.427 23:40:26 version -- scripts/common.sh@344 -- # case "$op" in 00:06:38.427 23:40:26 version -- scripts/common.sh@345 -- # : 1 00:06:38.427 23:40:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.427 23:40:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.427 23:40:26 version -- scripts/common.sh@365 -- # decimal 1 00:06:38.427 23:40:26 version -- scripts/common.sh@353 -- # local d=1 00:06:38.427 23:40:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.427 23:40:26 version -- scripts/common.sh@355 -- # echo 1 00:06:38.427 23:40:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.427 23:40:26 version -- scripts/common.sh@366 -- # decimal 2 00:06:38.427 23:40:26 version -- scripts/common.sh@353 -- # local d=2 00:06:38.428 23:40:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.428 23:40:26 version -- scripts/common.sh@355 -- # echo 2 00:06:38.428 23:40:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.428 23:40:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.428 23:40:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.428 23:40:26 version -- scripts/common.sh@368 -- # return 0 00:06:38.428 23:40:26 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.428 23:40:26 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.428 --rc genhtml_branch_coverage=1 00:06:38.428 --rc genhtml_function_coverage=1 00:06:38.428 --rc genhtml_legend=1 00:06:38.428 --rc geninfo_all_blocks=1 00:06:38.428 --rc geninfo_unexecuted_blocks=1 00:06:38.428 00:06:38.428 ' 00:06:38.428 23:40:26 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.428 --rc genhtml_branch_coverage=1 00:06:38.428 --rc genhtml_function_coverage=1 00:06:38.428 --rc genhtml_legend=1 00:06:38.428 --rc geninfo_all_blocks=1 00:06:38.428 --rc geninfo_unexecuted_blocks=1 00:06:38.428 00:06:38.428 ' 00:06:38.428 23:40:26 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.428 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:38.428 --rc genhtml_branch_coverage=1 00:06:38.428 --rc genhtml_function_coverage=1 00:06:38.428 --rc genhtml_legend=1 00:06:38.428 --rc geninfo_all_blocks=1 00:06:38.428 --rc geninfo_unexecuted_blocks=1 00:06:38.428 00:06:38.428 ' 00:06:38.428 23:40:26 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.428 --rc genhtml_branch_coverage=1 00:06:38.428 --rc genhtml_function_coverage=1 00:06:38.428 --rc genhtml_legend=1 00:06:38.428 --rc geninfo_all_blocks=1 00:06:38.428 --rc geninfo_unexecuted_blocks=1 00:06:38.428 00:06:38.428 ' 00:06:38.428 23:40:26 version -- app/version.sh@17 -- # get_header_version major 00:06:38.428 23:40:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:38.428 23:40:26 version -- app/version.sh@14 -- # cut -f2 00:06:38.428 23:40:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.428 23:40:26 version -- app/version.sh@17 -- # major=25 00:06:38.428 23:40:26 version -- app/version.sh@18 -- # get_header_version minor 00:06:38.428 23:40:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:38.428 23:40:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.428 23:40:26 version -- app/version.sh@14 -- # cut -f2 00:06:38.428 23:40:26 version -- app/version.sh@18 -- # minor=1 00:06:38.428 23:40:26 version -- app/version.sh@19 -- # get_header_version patch 00:06:38.428 23:40:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:38.428 23:40:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.428 23:40:26 version -- app/version.sh@14 -- # cut -f2 00:06:38.428 23:40:26 version -- app/version.sh@19 -- # patch=0 00:06:38.428 23:40:26 version -- app/version.sh@20 -- # get_header_version suffix 00:06:38.428 23:40:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:38.428 23:40:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.428 23:40:26 version -- app/version.sh@14 -- # cut -f2 00:06:38.428 23:40:26 version -- app/version.sh@20 -- # suffix=-pre 00:06:38.428 23:40:26 version -- app/version.sh@22 -- # version=25.1 00:06:38.428 23:40:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:38.428 23:40:26 version -- app/version.sh@28 -- # version=25.1rc0 00:06:38.428 23:40:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:38.428 23:40:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:38.428 23:40:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:38.428 23:40:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:38.428 00:06:38.428 real 0m0.199s 00:06:38.428 user 0m0.132s 00:06:38.428 sys 0m0.094s 00:06:38.428 ************************************ 00:06:38.428 END TEST version 00:06:38.428 ************************************ 00:06:38.428 23:40:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.428 23:40:26 version -- common/autotest_common.sh@10 -- # set +x 00:06:38.428 23:40:26 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:38.428 23:40:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:38.428 23:40:26 -- spdk/autotest.sh@194 -- # uname -s 00:06:38.428 23:40:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:38.428 23:40:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:38.428 23:40:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:38.428 23:40:26 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:38.428 23:40:26 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:38.428 23:40:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.428 23:40:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.428 23:40:26 -- common/autotest_common.sh@10 -- # set +x 00:06:38.428 ************************************ 00:06:38.428 START TEST blockdev_nvme 00:06:38.428 ************************************ 00:06:38.428 23:40:26 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:38.686 * Looking for test storage... 00:06:38.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.686 23:40:26 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.686 --rc genhtml_branch_coverage=1 00:06:38.686 --rc genhtml_function_coverage=1 00:06:38.686 --rc genhtml_legend=1 00:06:38.686 --rc geninfo_all_blocks=1 00:06:38.686 --rc geninfo_unexecuted_blocks=1 00:06:38.686 00:06:38.686 ' 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.686 --rc genhtml_branch_coverage=1 00:06:38.686 --rc genhtml_function_coverage=1 00:06:38.686 --rc genhtml_legend=1 00:06:38.686 --rc geninfo_all_blocks=1 00:06:38.686 --rc geninfo_unexecuted_blocks=1 00:06:38.686 00:06:38.686 ' 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.686 --rc genhtml_branch_coverage=1 00:06:38.686 --rc genhtml_function_coverage=1 00:06:38.686 --rc genhtml_legend=1 00:06:38.686 --rc geninfo_all_blocks=1 00:06:38.686 --rc geninfo_unexecuted_blocks=1 00:06:38.686 00:06:38.686 ' 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.686 --rc genhtml_branch_coverage=1 00:06:38.686 --rc genhtml_function_coverage=1 00:06:38.686 --rc genhtml_legend=1 00:06:38.686 --rc geninfo_all_blocks=1 00:06:38.686 --rc geninfo_unexecuted_blocks=1 00:06:38.686 00:06:38.686 ' 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:38.686 23:40:26 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71528 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71528 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71528 ']' 00:06:38.686 23:40:26 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.686 23:40:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.686 [2024-11-26 23:40:26.736063] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:38.686 [2024-11-26 23:40:26.736343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71528 ] 00:06:38.945 [2024-11-26 23:40:26.881756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.945 [2024-11-26 23:40:26.906207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.510 23:40:27 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.510 23:40:27 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:39.510 23:40:27 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:39.510 23:40:27 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:39.510 23:40:27 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:39.510 23:40:27 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:39.510 23:40:27 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:39.510 23:40:27 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:39.510 23:40:27 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.510 23:40:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.768 23:40:27 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.768 23:40:27 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:39.768 23:40:27 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.768 23:40:27 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.768 23:40:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.027 23:40:27 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.027 23:40:27 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:40.027 23:40:27 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.027 23:40:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.027 23:40:27 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.027 23:40:27 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:40.027 23:40:27 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:40.027 23:40:27 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:40.027 23:40:27 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.027 23:40:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.027 23:40:27 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.027 23:40:27 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:40.028 23:40:27 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0987b76c-48f0-4a39-8bf8-b2f5c9253ca2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0987b76c-48f0-4a39-8bf8-b2f5c9253ca2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "cbc53863-57b6-46f1-96a8-383549f1f49b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cbc53863-57b6-46f1-96a8-383549f1f49b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8c5d82cc-46e2-4a04-97d1-5e8d97441c24"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c5d82cc-46e2-4a04-97d1-5e8d97441c24",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "510c9642-f8d2-4dbf-9d0c-c70a5d515f03"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "510c9642-f8d2-4dbf-9d0c-c70a5d515f03",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "eaabd91b-904a-4010-951d-6d126573f748"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eaabd91b-904a-4010-951d-6d126573f748",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8c93d9c3-b0ae-46ea-8204-3662612f2266"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8c93d9c3-b0ae-46ea-8204-3662612f2266",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:40.028 23:40:27 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:40.028 23:40:27 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:40.028 23:40:27 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:40.028 23:40:27 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:40.028 23:40:27 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71528 00:06:40.028 23:40:27 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71528 ']' 00:06:40.028 23:40:27 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71528 00:06:40.028 23:40:27 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:40.028 23:40:28 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.028 23:40:28 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71528 00:06:40.028 23:40:28 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.028 23:40:28 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.028 killing process with pid 71528 00:06:40.028 23:40:28 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71528' 00:06:40.028 23:40:28 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71528 00:06:40.028 23:40:28 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71528 00:06:40.286 23:40:28 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:40.286 23:40:28 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:40.286 23:40:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:40.286 23:40:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.287 23:40:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 ************************************ 00:06:40.287 START TEST bdev_hello_world 00:06:40.287 ************************************ 00:06:40.287 23:40:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:40.287 [2024-11-26 23:40:28.413243] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:40.287 [2024-11-26 23:40:28.413378] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71596 ] 00:06:40.544 [2024-11-26 23:40:28.558473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.544 [2024-11-26 23:40:28.583166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.107 [2024-11-26 23:40:28.971774] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:41.107 [2024-11-26 23:40:28.971842] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:41.107 [2024-11-26 23:40:28.971873] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:41.107 [2024-11-26 23:40:28.974088] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:41.107 [2024-11-26 23:40:28.974439] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:41.107 [2024-11-26 23:40:28.974459] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:41.107 [2024-11-26 23:40:28.974721] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
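The unclaimed-bdev dump printed above (Nvme0n1 through Nvme3n1) is produced by a bdev_get_bdevs RPC filtered through jq before blockdev.sh picks Nvme0n1 as the hello-world bdev. A rough, hedged equivalent against the same socket, collapsing the two jq passes from the trace into one, would be:

  # list the names of bdevs that nothing has claimed yet (sketch; field names as printed in the dump above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'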
00:06:41.107 00:06:41.107 [2024-11-26 23:40:28.974743] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:41.107 00:06:41.107 real 0m0.796s 00:06:41.107 user 0m0.530s 00:06:41.107 sys 0m0.163s 00:06:41.107 23:40:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.107 ************************************ 00:06:41.107 23:40:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:41.107 END TEST bdev_hello_world 00:06:41.107 ************************************ 00:06:41.107 23:40:29 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:41.107 23:40:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.107 23:40:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.107 23:40:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.107 ************************************ 00:06:41.107 START TEST bdev_bounds 00:06:41.107 ************************************ 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:41.107 Process bdevio pid: 71621 00:06:41.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71621 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71621' 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71621 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71621 ']' 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.107 23:40:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:41.364 [2024-11-26 23:40:29.255721] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
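The bdev_bounds test starting here drives the bdevio CUnit binary rather than the hello example: bdevio is launched in wait mode with the same bdev.json, and tests.py then issues the perform_tests RPC that kicks off the per-bdev suites shown below. A hedged sketch of doing the same by hand, using the exact flags from the trace (the second command would normally be run once the first reports it is listening):

  # start the bdevio app in RPC-wait mode against the generated NVMe config (flags as in the trace)
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # trigger the CUnit suites (blockdev write/read, comparev, passthru, ...) over RPC
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests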
00:06:41.364 [2024-11-26 23:40:29.255874] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71621 ] 00:06:41.364 [2024-11-26 23:40:29.403290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.364 [2024-11-26 23:40:29.432873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.364 [2024-11-26 23:40:29.432888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.364 [2024-11-26 23:40:29.432928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.016 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.017 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:42.017 23:40:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:42.275 I/O targets: 00:06:42.275 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:42.275 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:42.275 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:42.275 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:42.275 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:42.275 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:42.275 00:06:42.275 00:06:42.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.275 http://cunit.sourceforge.net/ 00:06:42.275 00:06:42.275 00:06:42.275 Suite: bdevio tests on: Nvme3n1 00:06:42.275 Test: blockdev write read block ...passed 00:06:42.275 Test: blockdev write zeroes read block ...passed 00:06:42.275 Test: blockdev write zeroes read no split ...passed 00:06:42.275 Test: blockdev write zeroes read split ...passed 00:06:42.275 Test: blockdev write zeroes read split partial ...passed 00:06:42.275 Test: blockdev reset ...[2024-11-26 23:40:30.204867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:42.275 passed 00:06:42.276 Test: blockdev write read 8 blocks ...[2024-11-26 23:40:30.206994] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:42.276 passed 00:06:42.276 Test: blockdev write read size > 128k ...passed 00:06:42.276 Test: blockdev write read invalid size ...passed 00:06:42.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.276 Test: blockdev write read max offset ...passed 00:06:42.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.276 Test: blockdev writev readv 8 blocks ...passed 00:06:42.276 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.276 Test: blockdev writev readv block ...passed 00:06:42.276 Test: blockdev writev readv size > 128k ...passed 00:06:42.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.276 Test: blockdev comparev and writev ...[2024-11-26 23:40:30.211583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bac06000 len:0x1000 00:06:42.276 [2024-11-26 23:40:30.211634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.276 passed 00:06:42.276 Test: blockdev nvme passthru rw ...passed 00:06:42.276 Test: blockdev nvme passthru vendor specific ...[2024-11-26 23:40:30.212095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:42.276 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:42.276 [2024-11-26 23:40:30.212212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:42.276 passed 00:06:42.276 Test: blockdev copy ...passed 00:06:42.276 Suite: bdevio tests on: Nvme2n3 00:06:42.276 Test: blockdev write read block ...passed 00:06:42.276 Test: blockdev write zeroes read block ...passed 00:06:42.276 Test: blockdev write zeroes read no split ...passed 00:06:42.276 Test: blockdev write zeroes read split ...passed 00:06:42.276 Test: blockdev write zeroes read split partial ...passed 00:06:42.276 Test: blockdev reset ...[2024-11-26 23:40:30.225738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:42.276 [2024-11-26 23:40:30.227836] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:42.276 Test: blockdev write read 8 blocks ...passed 00:06:42.276 Test: blockdev write read size > 128k ...uccessful. 
00:06:42.276 passed 00:06:42.276 Test: blockdev write read invalid size ...passed 00:06:42.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.276 Test: blockdev write read max offset ...passed 00:06:42.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.276 Test: blockdev writev readv 8 blocks ...passed 00:06:42.276 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.276 Test: blockdev writev readv block ...passed 00:06:42.276 Test: blockdev writev readv size > 128k ...passed 00:06:42.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.276 Test: blockdev comparev and writev ...[2024-11-26 23:40:30.231810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7402000 len:0x1000 00:06:42.276 [2024-11-26 23:40:30.231848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.276 passed 00:06:42.276 Test: blockdev nvme passthru rw ...passed 00:06:42.276 Test: blockdev nvme passthru vendor specific ...passed 00:06:42.276 Test: blockdev nvme admin passthru ...[2024-11-26 23:40:30.232346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:42.276 [2024-11-26 23:40:30.232371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:42.276 passed 00:06:42.276 Test: blockdev copy ...passed 00:06:42.276 Suite: bdevio tests on: Nvme2n2 00:06:42.276 Test: blockdev write read block ...passed 00:06:42.276 Test: blockdev write zeroes read block ...passed 00:06:42.276 Test: blockdev write zeroes read no split ...passed 00:06:42.276 Test: blockdev write zeroes read split ...passed 00:06:42.276 Test: blockdev write zeroes read split partial ...passed 00:06:42.276 Test: blockdev reset ...[2024-11-26 23:40:30.251609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:42.276 [2024-11-26 23:40:30.253625] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:42.276 passed 00:06:42.276 Test: blockdev write read 8 blocks ...passed 00:06:42.276 Test: blockdev write read size > 128k ...passed 00:06:42.276 Test: blockdev write read invalid size ...passed 00:06:42.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.276 Test: blockdev write read max offset ...passed 00:06:42.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.276 Test: blockdev writev readv 8 blocks ...passed 00:06:42.276 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.276 Test: blockdev writev readv block ...passed 00:06:42.276 Test: blockdev writev readv size > 128k ...passed 00:06:42.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.276 Test: blockdev comparev and writev ...[2024-11-26 23:40:30.258010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd43b000 len:0x1000 00:06:42.276 [2024-11-26 23:40:30.258050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.276 passed 00:06:42.276 Test: blockdev nvme passthru rw ...passed 00:06:42.276 Test: blockdev nvme passthru vendor specific ...passed 00:06:42.276 Test: blockdev nvme admin passthru ...[2024-11-26 23:40:30.258584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:42.276 [2024-11-26 23:40:30.258608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:42.276 passed 00:06:42.276 Test: blockdev copy ...passed 00:06:42.276 Suite: bdevio tests on: Nvme2n1 00:06:42.276 Test: blockdev write read block ...passed 00:06:42.276 Test: blockdev write zeroes read block ...passed 00:06:42.276 Test: blockdev write zeroes read no split ...passed 00:06:42.276 Test: blockdev write zeroes read split ...passed 00:06:42.276 Test: blockdev write zeroes read split partial ...passed 00:06:42.276 Test: blockdev reset ...[2024-11-26 23:40:30.271766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:42.276 [2024-11-26 23:40:30.273605] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:42.276 passed 00:06:42.276 Test: blockdev write read 8 blocks ...passed 00:06:42.276 Test: blockdev write read size > 128k ...passed 00:06:42.276 Test: blockdev write read invalid size ...passed 00:06:42.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.276 Test: blockdev write read max offset ...passed 00:06:42.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.276 Test: blockdev writev readv 8 blocks ...passed 00:06:42.276 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.276 Test: blockdev writev readv block ...passed 00:06:42.276 Test: blockdev writev readv size > 128k ...passed 00:06:42.276 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.276 Test: blockdev comparev and writev ...[2024-11-26 23:40:30.277576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd437000 len:0x1000 00:06:42.277 [2024-11-26 23:40:30.277612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.277 passed 00:06:42.277 Test: blockdev nvme passthru rw ...passed 00:06:42.277 Test: blockdev nvme passthru vendor specific ...passed 00:06:42.277 Test: blockdev nvme admin passthru ...[2024-11-26 23:40:30.278094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:42.277 [2024-11-26 23:40:30.278120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:42.277 passed 00:06:42.277 Test: blockdev copy ...passed 00:06:42.277 Suite: bdevio tests on: Nvme1n1 00:06:42.277 Test: blockdev write read block ...passed 00:06:42.277 Test: blockdev write zeroes read block ...passed 00:06:42.277 Test: blockdev write zeroes read no split ...passed 00:06:42.277 Test: blockdev write zeroes read split ...passed 00:06:42.277 Test: blockdev write zeroes read split partial ...passed 00:06:42.277 Test: blockdev reset ...[2024-11-26 23:40:30.293339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:42.277 [2024-11-26 23:40:30.294887] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:06:42.277 Test: blockdev write read 8 blocks ...passed 00:06:42.277 Test: blockdev write read size > 128k ...uccessful. 
00:06:42.277 passed 00:06:42.277 Test: blockdev write read invalid size ...passed 00:06:42.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.277 Test: blockdev write read max offset ...passed 00:06:42.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.277 Test: blockdev writev readv 8 blocks ...passed 00:06:42.277 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.277 Test: blockdev writev readv block ...passed 00:06:42.277 Test: blockdev writev readv size > 128k ...passed 00:06:42.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.277 Test: blockdev comparev and writev ...[2024-11-26 23:40:30.298743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd433000 len:0x1000 00:06:42.277 [2024-11-26 23:40:30.298779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:42.277 passed 00:06:42.277 Test: blockdev nvme passthru rw ...passed 00:06:42.277 Test: blockdev nvme passthru vendor specific ...passed 00:06:42.277 Test: blockdev nvme admin passthru ...[2024-11-26 23:40:30.299327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:42.277 [2024-11-26 23:40:30.299353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:42.277 passed 00:06:42.277 Test: blockdev copy ...passed 00:06:42.277 Suite: bdevio tests on: Nvme0n1 00:06:42.277 Test: blockdev write read block ...passed 00:06:42.277 Test: blockdev write zeroes read block ...passed 00:06:42.277 Test: blockdev write zeroes read no split ...passed 00:06:42.277 Test: blockdev write zeroes read split ...passed 00:06:42.277 Test: blockdev write zeroes read split partial ...passed 00:06:42.277 Test: blockdev reset ...[2024-11-26 23:40:30.319523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:42.277 [2024-11-26 23:40:30.321270] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:42.277 passed 00:06:42.277 Test: blockdev write read 8 blocks ...passed 00:06:42.277 Test: blockdev write read size > 128k ...passed 00:06:42.277 Test: blockdev write read invalid size ...passed 00:06:42.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:42.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:42.277 Test: blockdev write read max offset ...passed 00:06:42.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:42.277 Test: blockdev writev readv 8 blocks ...passed 00:06:42.277 Test: blockdev writev readv 30 x 1block ...passed 00:06:42.277 Test: blockdev writev readv block ...passed 00:06:42.277 Test: blockdev writev readv size > 128k ...passed 00:06:42.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:42.277 Test: blockdev comparev and writev ...[2024-11-26 23:40:30.325537] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:42.277 separate metadata which is not supported yet. 
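The *ERROR* line just above is expected rather than a failure: bdevio skips its comparev_and_writev case on Nvme0n1 because that namespace exposes separate (non-interleaved) metadata, matching the "md_size": 64 and "md_interleave": false fields in the bdev dump earlier in this run, and the test is still recorded as passed. A hedged jq one-liner to spot which bdevs fall into that category (the // 0 default is an assumption for bdevs that omit md_size entirely):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'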
00:06:42.277 passed 00:06:42.277 Test: blockdev nvme passthru rw ...passed 00:06:42.277 Test: blockdev nvme passthru vendor specific ...[2024-11-26 23:40:30.325994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:06:42.277 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:42.277 [2024-11-26 23:40:30.326114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:42.277 passed 00:06:42.277 Test: blockdev copy ...passed 00:06:42.277 00:06:42.277 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.277 suites 6 6 n/a 0 0 00:06:42.277 tests 138 138 138 0 0 00:06:42.277 asserts 893 893 893 0 n/a 00:06:42.277 00:06:42.277 Elapsed time = 0.325 seconds 00:06:42.277 0 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71621 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71621 ']' 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71621 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71621 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71621' 00:06:42.277 killing process with pid 71621 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71621 00:06:42.277 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71621 00:06:42.535 23:40:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:42.535 00:06:42.535 real 0m1.313s 00:06:42.535 user 0m3.319s 00:06:42.535 sys 0m0.299s 00:06:42.535 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.535 23:40:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:42.535 ************************************ 00:06:42.535 END TEST bdev_bounds 00:06:42.535 ************************************ 00:06:42.535 23:40:30 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:42.535 23:40:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:42.535 23:40:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.535 23:40:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.535 ************************************ 00:06:42.535 START TEST bdev_nbd 00:06:42.535 ************************************ 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:42.535 23:40:30 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71675 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71675 /var/tmp/spdk-nbd.sock 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71675 ']' 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.535 23:40:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:42.535 [2024-11-26 23:40:30.619849] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
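The bdev_nbd test that starts here exports each NVMe bdev through the Linux nbd driver (the script has already checked for /sys/module/nbd) by talking to a bdev_svc app on /var/tmp/spdk-nbd.sock, then verifies each export with a single direct-I/O read. One round-trip, sketched from the commands that appear in the trace below — only the /tmp/nbdtest output path is an arbitrary stand-in:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # export the bdev as a kernel block device
  $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # read one 4 KiB block straight through the nbd device (O_DIRECT)
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  # tear the export down again
  $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0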
00:06:42.535 [2024-11-26 23:40:30.620153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.799 [2024-11-26 23:40:30.770076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.799 [2024-11-26 23:40:30.801041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:43.369 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.626 1+0 records in 
00:06:43.626 1+0 records out 00:06:43.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593329 s, 6.9 MB/s 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:43.626 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.884 1+0 records in 00:06:43.884 1+0 records out 00:06:43.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538366 s, 7.6 MB/s 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:43.884 23:40:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.142 1+0 records in 00:06:44.142 1+0 records out 00:06:44.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280172 s, 14.6 MB/s 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:44.142 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.400 1+0 records in 00:06:44.400 1+0 records out 00:06:44.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491067 s, 8.3 MB/s 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.400 23:40:32 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:44.400 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.657 1+0 records in 00:06:44.657 1+0 records out 00:06:44.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032007 s, 12.8 MB/s 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:44.657 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.915 1+0 records in 00:06:44.915 1+0 records out 00:06:44.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040319 s, 10.2 MB/s 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:44.915 23:40:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd0", 00:06:45.173 "bdev_name": "Nvme0n1" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd1", 00:06:45.173 "bdev_name": "Nvme1n1" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd2", 00:06:45.173 "bdev_name": "Nvme2n1" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd3", 00:06:45.173 "bdev_name": "Nvme2n2" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd4", 00:06:45.173 "bdev_name": "Nvme2n3" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd5", 00:06:45.173 "bdev_name": "Nvme3n1" 00:06:45.173 } 00:06:45.173 ]' 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd0", 00:06:45.173 "bdev_name": "Nvme0n1" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd1", 00:06:45.173 "bdev_name": "Nvme1n1" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd2", 00:06:45.173 "bdev_name": "Nvme2n1" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd3", 00:06:45.173 "bdev_name": "Nvme2n2" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd4", 00:06:45.173 "bdev_name": "Nvme2n3" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd5", 00:06:45.173 "bdev_name": "Nvme3n1" 00:06:45.173 } 00:06:45.173 ]' 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.173 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.430 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.687 23:40:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.945 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.202 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.460 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.718 23:40:34 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:46.718 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:46.719 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:46.996 /dev/nbd0 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.996 
23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.996 1+0 records in 00:06:46.996 1+0 records out 00:06:46.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638502 s, 6.4 MB/s 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:46.996 23:40:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:47.254 /dev/nbd1 00:06:47.254 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.254 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.254 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:47.254 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.254 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.255 1+0 records in 00:06:47.255 1+0 records out 00:06:47.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296573 s, 13.8 MB/s 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:47.255 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:47.512 /dev/nbd10 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.512 1+0 records in 00:06:47.512 1+0 records out 00:06:47.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455531 s, 9.0 MB/s 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:47.512 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:47.770 /dev/nbd11 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.770 23:40:35 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.770 1+0 records in 00:06:47.770 1+0 records out 00:06:47.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560589 s, 7.3 MB/s 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:47.770 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:48.027 /dev/nbd12 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.027 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.028 1+0 records in 00:06:48.028 1+0 records out 00:06:48.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339604 s, 12.1 MB/s 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:48.028 23:40:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:48.028 /dev/nbd13 
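The waitfornbd loop traced above runs once per NBD device before any data is written to it. A minimal bash sketch of what the trace implies — reconstructed from the xtrace lines at common/autotest_common.sh@872-893, not copied from the SPDK source, and with the sleep between polls being an assumption — looks like this:

waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # scratch file path taken from the trace
    # First loop: wait (up to 20 polls) for the device to appear in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the interval is not visible in the log
    done
    # Second loop: prove the device actually answers a direct-I/O read.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]   # a zero-byte read means the device is not usable yet
}

The same pattern (grep, break, dd, stat, rm, size check) repeats verbatim in the trace for every device in the nbd_list.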
00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.285 1+0 records in 00:06:48.285 1+0 records out 00:06:48.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412367 s, 9.9 MB/s 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.285 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.286 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd0", 00:06:48.286 "bdev_name": "Nvme0n1" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd1", 00:06:48.286 "bdev_name": "Nvme1n1" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd10", 00:06:48.286 "bdev_name": "Nvme2n1" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd11", 00:06:48.286 "bdev_name": "Nvme2n2" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd12", 00:06:48.286 "bdev_name": "Nvme2n3" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd13", 00:06:48.286 "bdev_name": "Nvme3n1" 00:06:48.286 } 00:06:48.286 ]' 00:06:48.286 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd0", 00:06:48.286 "bdev_name": "Nvme0n1" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd1", 00:06:48.286 "bdev_name": "Nvme1n1" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd10", 00:06:48.286 "bdev_name": "Nvme2n1" 
00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd11", 00:06:48.286 "bdev_name": "Nvme2n2" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd12", 00:06:48.286 "bdev_name": "Nvme2n3" 00:06:48.286 }, 00:06:48.286 { 00:06:48.286 "nbd_device": "/dev/nbd13", 00:06:48.286 "bdev_name": "Nvme3n1" 00:06:48.286 } 00:06:48.286 ]' 00:06:48.286 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:48.555 /dev/nbd1 00:06:48.555 /dev/nbd10 00:06:48.555 /dev/nbd11 00:06:48.555 /dev/nbd12 00:06:48.555 /dev/nbd13' 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:48.555 /dev/nbd1 00:06:48.555 /dev/nbd10 00:06:48.555 /dev/nbd11 00:06:48.555 /dev/nbd12 00:06:48.555 /dev/nbd13' 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:48.555 256+0 records in 00:06:48.555 256+0 records out 00:06:48.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420729 s, 249 MB/s 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:48.555 256+0 records in 00:06:48.555 256+0 records out 00:06:48.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0632958 s, 16.6 MB/s 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:48.555 256+0 records in 00:06:48.555 256+0 records out 00:06:48.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.071904 s, 14.6 MB/s 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:48.555 256+0 records in 00:06:48.555 256+0 records out 00:06:48.555 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0620039 s, 16.9 MB/s 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.555 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:48.813 256+0 records in 00:06:48.813 256+0 records out 00:06:48.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0639426 s, 16.4 MB/s 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:48.813 256+0 records in 00:06:48.813 256+0 records out 00:06:48.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0694235 s, 15.1 MB/s 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:48.813 256+0 records in 00:06:48.813 256+0 records out 00:06:48.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.065732 s, 16.0 MB/s 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:48.813 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.814 23:40:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.814 23:40:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.071 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.328 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.328 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.328 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.328 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.328 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.329 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.329 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.329 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.329 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.329 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.585 
23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.585 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.843 23:40:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.101 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:50.360 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:50.618 malloc_lvol_verify 00:06:50.619 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:50.876 33cf3989-7d85-43a1-93ac-2204e90db2fe 00:06:50.876 23:40:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:51.133 b74f2c7d-f193-4d46-ad25-3bb614c60c0e 00:06:51.133 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:51.392 /dev/nbd0 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:51.392 mke2fs 1.47.0 (5-Feb-2023) 00:06:51.392 Discarding device blocks: 0/4096 done 00:06:51.392 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:51.392 00:06:51.392 Allocating group tables: 0/1 done 00:06:51.392 Writing inode tables: 0/1 done 00:06:51.392 Creating journal (1024 blocks): done 00:06:51.392 Writing superblocks and filesystem accounting information: 0/1 done 00:06:51.392 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
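The nbd_with_lvol_verify step that wraps up just below condenses to a handful of RPCs plus a filesystem check. This is a hedged summary of the sequence as it appears in the trace — socket path, bdev names and size arguments are copied from the log; the comments on what each step does are an interpretation:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # small malloc bdev (arguments as in the trace)
$rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
$rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # logical volume "lvol" inside store "lvs"
$rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                             # formatting it proves reads and writes work end to end
$rpc -s $sock nbd_stop_disk /dev/nbd0                           # teardown, matching the nbd_stop_disks call traced here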
00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.392 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71675 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71675 ']' 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71675 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71675 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.650 killing process with pid 71675 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71675' 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71675 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71675 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:51.650 00:06:51.650 real 0m9.194s 00:06:51.650 user 0m13.505s 00:06:51.650 sys 0m3.109s 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.650 23:40:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:51.650 ************************************ 00:06:51.650 END TEST bdev_nbd 00:06:51.650 ************************************ 00:06:51.650 23:40:39 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:51.650 23:40:39 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:51.650 skipping fio tests on NVMe due to multi-ns failures. 00:06:51.650 23:40:39 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
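The three bdevperf runs that follow (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) reuse one binary and the same generated bdev.json; only the workload, I/O size and duration change. The command lines below are copied from the traced run_test invocations; the trailing comments are an interpretation, and the -C and -m flags are left exactly as they appear in the log:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$bdevperf --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # 4 KiB verify workload, queue depth 128, 5 s
$bdevperf --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # same workload with 64 KiB I/O ("big io")
$bdevperf --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1             # 1 s write_zeroes pass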
00:06:51.650 23:40:39 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:51.650 23:40:39 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:51.650 23:40:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:51.650 23:40:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.650 23:40:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.909 ************************************ 00:06:51.909 START TEST bdev_verify 00:06:51.909 ************************************ 00:06:51.909 23:40:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:51.909 [2024-11-26 23:40:39.845328] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:06:51.909 [2024-11-26 23:40:39.845938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72039 ] 00:06:51.909 [2024-11-26 23:40:39.993732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.909 [2024-11-26 23:40:40.020264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.909 [2024-11-26 23:40:40.020340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.476 Running I/O for 5 seconds... 00:06:54.810 23616.00 IOPS, 92.25 MiB/s [2024-11-26T23:40:43.873Z] 24064.00 IOPS, 94.00 MiB/s [2024-11-26T23:40:44.803Z] 25344.00 IOPS, 99.00 MiB/s [2024-11-26T23:40:45.737Z] 25952.00 IOPS, 101.38 MiB/s [2024-11-26T23:40:45.737Z] 25689.60 IOPS, 100.35 MiB/s 00:06:57.606 Latency(us) 00:06:57.606 [2024-11-26T23:40:45.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.606 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:57.606 Verification LBA range: start 0x0 length 0xbd0bd 00:06:57.606 Nvme0n1 : 5.06 2174.75 8.50 0.00 0.00 58722.26 10334.52 64931.05 00:06:57.606 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:57.606 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:57.607 Nvme0n1 : 5.07 2070.64 8.09 0.00 0.00 61672.66 10536.17 69770.63 00:06:57.607 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x0 length 0xa0000 00:06:57.607 Nvme1n1 : 5.06 2174.27 8.49 0.00 0.00 58664.25 10687.41 60494.77 00:06:57.607 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0xa0000 length 0xa0000 00:06:57.607 Nvme1n1 : 5.07 2070.05 8.09 0.00 0.00 61510.10 10737.82 55251.89 00:06:57.607 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x0 length 0x80000 00:06:57.607 Nvme2n1 : 5.06 2173.79 8.49 0.00 0.00 58567.97 10435.35 58074.98 00:06:57.607 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x80000 length 0x80000 00:06:57.607 Nvme2n1 : 5.07 2068.83 8.08 0.00 0.00 61381.82 12603.08 55655.19 00:06:57.607 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x0 length 0x80000 00:06:57.607 Nvme2n2 : 5.07 2173.35 8.49 0.00 0.00 58472.60 10334.52 56865.08 00:06:57.607 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x80000 length 0x80000 00:06:57.607 Nvme2n2 : 5.07 2068.30 8.08 0.00 0.00 61275.16 12703.90 58074.98 00:06:57.607 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x0 length 0x80000 00:06:57.607 Nvme2n3 : 5.07 2172.89 8.49 0.00 0.00 58366.99 10183.29 59688.17 00:06:57.607 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x80000 length 0x80000 00:06:57.607 Nvme2n3 : 5.08 2067.76 8.08 0.00 0.00 61173.83 10838.65 60091.47 00:06:57.607 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x0 length 0x20000 00:06:57.607 Nvme3n1 : 5.07 2172.41 8.49 0.00 0.00 58270.14 8318.03 61704.66 00:06:57.607 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:57.607 Verification LBA range: start 0x20000 length 0x20000 00:06:57.607 Nvme3n1 : 5.08 2067.22 8.08 0.00 0.00 61121.51 8368.44 61704.66 00:06:57.607 [2024-11-26T23:40:45.738Z] =================================================================================================================== 00:06:57.607 [2024-11-26T23:40:45.738Z] Total : 25454.25 99.43 0.00 0.00 59899.40 8318.03 69770.63 00:06:58.173 00:06:58.173 real 0m6.374s 00:06:58.173 user 0m12.085s 00:06:58.173 sys 0m0.192s 00:06:58.173 23:40:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.173 23:40:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.173 ************************************ 00:06:58.173 END TEST bdev_verify 00:06:58.173 ************************************ 00:06:58.173 23:40:46 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:58.173 23:40:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:58.173 23:40:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.173 23:40:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.173 ************************************ 00:06:58.173 START TEST bdev_verify_big_io 00:06:58.173 ************************************ 00:06:58.173 23:40:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:58.173 [2024-11-26 23:40:46.259110] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:06:58.173 [2024-11-26 23:40:46.259225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72130 ] 00:06:58.431 [2024-11-26 23:40:46.403824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.431 [2024-11-26 23:40:46.429583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.432 [2024-11-26 23:40:46.429677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.997 Running I/O for 5 seconds... 00:07:03.460 1227.00 IOPS, 76.69 MiB/s [2024-11-26T23:40:52.964Z] 2862.50 IOPS, 178.91 MiB/s [2024-11-26T23:40:52.964Z] 3429.33 IOPS, 214.33 MiB/s 00:07:04.833 Latency(us) 00:07:04.833 [2024-11-26T23:40:52.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.833 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x0 length 0xbd0b 00:07:04.833 Nvme0n1 : 5.49 151.55 9.47 0.00 0.00 810873.70 9527.93 1109877.37 00:07:04.833 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:04.833 Nvme0n1 : 5.63 158.90 9.93 0.00 0.00 784689.14 17442.66 987274.63 00:07:04.833 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x0 length 0xa000 00:07:04.833 Nvme1n1 : 5.69 143.51 8.97 0.00 0.00 820951.04 100018.02 1355082.83 00:07:04.833 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0xa000 length 0xa000 00:07:04.833 Nvme1n1 : 5.63 155.11 9.69 0.00 0.00 769779.10 59688.17 819502.47 00:07:04.833 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x0 length 0x8000 00:07:04.833 Nvme2n1 : 5.80 158.61 9.91 0.00 0.00 727023.29 54848.59 1051802.39 00:07:04.833 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x8000 length 0x8000 00:07:04.833 Nvme2n1 : 5.70 157.61 9.85 0.00 0.00 735808.40 108083.99 774333.05 00:07:04.833 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x0 length 0x8000 00:07:04.833 Nvme2n2 : 5.85 159.89 9.99 0.00 0.00 693599.67 47992.52 1071160.71 00:07:04.833 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x8000 length 0x8000 00:07:04.833 Nvme2n2 : 5.75 166.96 10.44 0.00 0.00 684121.90 43354.58 764653.88 00:07:04.833 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x0 length 0x8000 00:07:04.833 Nvme2n3 : 5.88 171.54 10.72 0.00 0.00 631508.88 9931.22 1464780.01 00:07:04.833 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x8000 length 0x8000 00:07:04.833 Nvme2n3 : 5.81 176.35 11.02 0.00 0.00 631026.61 31457.28 838860.80 00:07:04.833 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x0 length 0x2000 00:07:04.833 Nvme3n1 : 5.99 253.45 15.84 0.00 0.00 418147.91 222.13 1490591.11 00:07:04.833 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:07:04.833 Verification LBA range: start 0x2000 length 0x2000 00:07:04.833 Nvme3n1 : 5.86 196.55 12.28 0.00 0.00 550928.49 535.63 922746.88 00:07:04.833 [2024-11-26T23:40:52.964Z] =================================================================================================================== 00:07:04.833 [2024-11-26T23:40:52.964Z] Total : 2050.03 128.13 0.00 0.00 668243.97 222.13 1490591.11 00:07:06.205 00:07:06.205 real 0m7.801s 00:07:06.205 user 0m14.910s 00:07:06.205 sys 0m0.225s 00:07:06.205 23:40:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.205 23:40:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:06.205 ************************************ 00:07:06.205 END TEST bdev_verify_big_io 00:07:06.205 ************************************ 00:07:06.205 23:40:54 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:06.205 23:40:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:06.205 23:40:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.205 23:40:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.205 ************************************ 00:07:06.205 START TEST bdev_write_zeroes 00:07:06.205 ************************************ 00:07:06.205 23:40:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:06.205 [2024-11-26 23:40:54.089094] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:06.205 [2024-11-26 23:40:54.089191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72230 ] 00:07:06.205 [2024-11-26 23:40:54.224482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.205 [2024-11-26 23:40:54.246720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.770 Running I/O for 1 seconds... 
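A quick consistency check on the throughput columns in these tables: with the 4096-byte I/O size used for the verify and write_zeroes runs, MiB/s is simply IOPS × 4096 / 2^20, i.e. IOPS / 256. The 25689.60 IOPS from the verify run above gives 25689.60 / 256 ≈ 100.35 MiB/s, and the 72576.00 IOPS reported just below gives ≈ 283.5 MiB/s, both matching the printed values. As a one-liner:

awk 'BEGIN { printf "%.2f MiB/s\n", 72576 * 4096 / 1048576 }'   # ≈ 283.50 MiB/s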
00:07:07.703 72576.00 IOPS, 283.50 MiB/s 00:07:07.703 Latency(us) 00:07:07.703 [2024-11-26T23:40:55.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.703 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:07.703 Nvme0n1 : 1.02 12068.78 47.14 0.00 0.00 10585.12 8872.57 20669.05 00:07:07.703 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:07.703 Nvme1n1 : 1.02 12054.29 47.09 0.00 0.00 10585.91 8872.57 20568.22 00:07:07.703 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:07.703 Nvme2n1 : 1.02 12040.63 47.03 0.00 0.00 10576.03 8771.74 19862.45 00:07:07.703 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:07.703 Nvme2n2 : 1.02 12027.03 46.98 0.00 0.00 10572.57 8872.57 19459.15 00:07:07.704 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:07.704 Nvme2n3 : 1.02 12013.15 46.93 0.00 0.00 10565.45 8922.98 18450.90 00:07:07.704 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:07.704 Nvme3n1 : 1.02 11999.51 46.87 0.00 0.00 10545.75 8418.86 19761.62 00:07:07.704 [2024-11-26T23:40:55.835Z] =================================================================================================================== 00:07:07.704 [2024-11-26T23:40:55.835Z] Total : 72203.38 282.04 0.00 0.00 10571.81 8418.86 20669.05 00:07:07.961 00:07:07.961 real 0m1.818s 00:07:07.961 user 0m1.549s 00:07:07.961 sys 0m0.161s 00:07:07.961 23:40:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.961 23:40:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:07.961 ************************************ 00:07:07.961 END TEST bdev_write_zeroes 00:07:07.961 ************************************ 00:07:07.961 23:40:55 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:07.961 23:40:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:07.961 23:40:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.961 23:40:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.961 ************************************ 00:07:07.961 START TEST bdev_json_nonenclosed 00:07:07.961 ************************************ 00:07:07.961 23:40:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:07.961 [2024-11-26 23:40:55.959111] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:07:07.962 [2024-11-26 23:40:55.959227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72267 ] 00:07:08.228 [2024-11-26 23:40:56.098572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.228 [2024-11-26 23:40:56.122760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.228 [2024-11-26 23:40:56.122868] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:08.228 [2024-11-26 23:40:56.122883] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:08.228 [2024-11-26 23:40:56.122898] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.228 00:07:08.228 real 0m0.289s 00:07:08.228 user 0m0.105s 00:07:08.228 sys 0m0.082s 00:07:08.228 23:40:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.228 23:40:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:08.228 ************************************ 00:07:08.228 END TEST bdev_json_nonenclosed 00:07:08.228 ************************************ 00:07:08.228 23:40:56 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.228 23:40:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:08.228 23:40:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.228 23:40:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.228 ************************************ 00:07:08.228 START TEST bdev_json_nonarray 00:07:08.228 ************************************ 00:07:08.228 23:40:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.228 [2024-11-26 23:40:56.290698] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:08.228 [2024-11-26 23:40:56.290836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72292 ] 00:07:08.485 [2024-11-26 23:40:56.435604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.485 [2024-11-26 23:40:56.459942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.485 [2024-11-26 23:40:56.460046] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:08.485 [2024-11-26 23:40:56.460066] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:08.485 [2024-11-26 23:40:56.460079] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.485 00:07:08.485 real 0m0.296s 00:07:08.485 user 0m0.118s 00:07:08.485 sys 0m0.076s 00:07:08.485 23:40:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.485 23:40:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:08.485 ************************************ 00:07:08.485 END TEST bdev_json_nonarray 00:07:08.485 ************************************ 00:07:08.485 23:40:56 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:08.485 23:40:56 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:08.485 23:40:56 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:08.485 23:40:56 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:08.485 23:40:56 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:08.485 23:40:56 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:08.485 23:40:56 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:08.485 23:40:56 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:08.486 23:40:56 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:08.486 23:40:56 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:08.486 23:40:56 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:08.486 00:07:08.486 real 0m30.053s 00:07:08.486 user 0m48.066s 00:07:08.486 sys 0m5.000s 00:07:08.486 23:40:56 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.486 23:40:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.486 ************************************ 00:07:08.486 END TEST blockdev_nvme 00:07:08.486 ************************************ 00:07:08.486 23:40:56 -- spdk/autotest.sh@209 -- # uname -s 00:07:08.486 23:40:56 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:08.486 23:40:56 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:08.486 23:40:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.486 23:40:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.486 23:40:56 -- common/autotest_common.sh@10 -- # set +x 00:07:08.486 ************************************ 00:07:08.486 START TEST blockdev_nvme_gpt 00:07:08.486 ************************************ 00:07:08.486 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:08.744 * Looking for test storage... 
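For reference on the two JSON negative tests above: the --json configuration bdevperf expects is an object whose "subsystems" key is an array of subsystem blocks, along the lines of (illustrative shape only; the actual nonenclosed.json and nonarray.json fixtures are not printed in this log):
  { "subsystems": [ { "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] } ] }
bdev_json_nonenclosed feeds a config missing the enclosing braces and bdev_json_nonarray feeds one where "subsystems" is not an array, so json_config_prepare_ctx rejects each with the errors shown and the app stops with a non-zero status.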
00:07:08.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:08.744 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.744 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:08.744 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.744 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:08.744 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.745 23:40:56 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:08.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.745 --rc genhtml_branch_coverage=1 00:07:08.745 --rc genhtml_function_coverage=1 00:07:08.745 --rc genhtml_legend=1 00:07:08.745 --rc geninfo_all_blocks=1 00:07:08.745 --rc geninfo_unexecuted_blocks=1 00:07:08.745 00:07:08.745 ' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:08.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.745 --rc 
genhtml_branch_coverage=1 00:07:08.745 --rc genhtml_function_coverage=1 00:07:08.745 --rc genhtml_legend=1 00:07:08.745 --rc geninfo_all_blocks=1 00:07:08.745 --rc geninfo_unexecuted_blocks=1 00:07:08.745 00:07:08.745 ' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:08.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.745 --rc genhtml_branch_coverage=1 00:07:08.745 --rc genhtml_function_coverage=1 00:07:08.745 --rc genhtml_legend=1 00:07:08.745 --rc geninfo_all_blocks=1 00:07:08.745 --rc geninfo_unexecuted_blocks=1 00:07:08.745 00:07:08.745 ' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:08.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.745 --rc genhtml_branch_coverage=1 00:07:08.745 --rc genhtml_function_coverage=1 00:07:08.745 --rc genhtml_legend=1 00:07:08.745 --rc geninfo_all_blocks=1 00:07:08.745 --rc geninfo_unexecuted_blocks=1 00:07:08.745 00:07:08.745 ' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72365 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72365 
00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72365 ']' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.745 23:40:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.745 [2024-11-26 23:40:56.832482] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:08.745 [2024-11-26 23:40:56.832621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72365 ] 00:07:09.003 [2024-11-26 23:40:56.974726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.003 [2024-11-26 23:40:56.999084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.587 23:40:57 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.587 23:40:57 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:09.587 23:40:57 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:09.587 23:40:57 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:09.587 23:40:57 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:09.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:10.119 Waiting for block devices as requested 00:07:10.119 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.119 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.381 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.381 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.687 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:15.687 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.687 23:41:03 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.687 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:15.688 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:15.688 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:15.688 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:15.688 23:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:15.688 23:41:03 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:15.688 BYT; 00:07:15.688 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:15.688 BYT; 00:07:15.688 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:15.688 23:41:03 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:15.688 23:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:16.627 The operation has completed successfully. 00:07:16.627 23:41:04 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:17.561 The operation has completed successfully. 00:07:17.561 23:41:05 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:18.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:18.386 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.386 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.644 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.644 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.644 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:18.644 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.644 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.644 [] 00:07:18.644 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.644 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:18.644 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:18.644 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:18.644 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:18.644 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:18.644 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.644 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.904 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.904 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:18.904 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:18.904 23:41:06 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.904 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.904 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.904 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:18.904 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.904 23:41:06 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:18.904 23:41:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.904 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.166 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:19.166 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:19.167 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4a31dad8-2532-4bce-a157-05ce4955e365"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4a31dad8-2532-4bce-a157-05ce4955e365",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d67f7de2-e6d5-4668-b7bc-8ebcea0cace1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d67f7de2-e6d5-4668-b7bc-8ebcea0cace1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "997d4d23-a406-495a-944f-9f09a5917925"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "997d4d23-a406-495a-944f-9f09a5917925",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3a758ca5-eb43-4e3a-99cf-4bdb5c274a68"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3a758ca5-eb43-4e3a-99cf-4bdb5c274a68",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e1cbf289-0569-48b9-b721-6d941acf7642"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e1cbf289-0569-48b9-b721-6d941acf7642",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:19.167 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:19.167 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:19.167 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:19.167 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72365 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72365 ']' 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72365 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72365 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.167 killing process with pid 72365 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72365' 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72365 00:07:19.167 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72365 00:07:19.739 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:19.739 23:41:07 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:19.739 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:19.739 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.739 23:41:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.739 ************************************ 00:07:19.739 START TEST bdev_hello_world 00:07:19.739 ************************************ 00:07:19.739 23:41:07 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:19.739 
[2024-11-26 23:41:07.708985] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:19.739 [2024-11-26 23:41:07.709153] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72984 ] 00:07:19.739 [2024-11-26 23:41:07.856376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.001 [2024-11-26 23:41:07.896918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.263 [2024-11-26 23:41:08.338352] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:20.263 [2024-11-26 23:41:08.338429] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:20.263 [2024-11-26 23:41:08.338456] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:20.263 [2024-11-26 23:41:08.341383] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:20.263 [2024-11-26 23:41:08.342450] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:20.263 [2024-11-26 23:41:08.342504] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:20.263 [2024-11-26 23:41:08.343138] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:20.263 00:07:20.263 [2024-11-26 23:41:08.343180] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:20.523 00:07:20.523 real 0m0.976s 00:07:20.523 user 0m0.625s 00:07:20.523 sys 0m0.242s 00:07:20.523 23:41:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.523 ************************************ 00:07:20.523 END TEST bdev_hello_world 00:07:20.523 ************************************ 00:07:20.523 23:41:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:20.784 23:41:08 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:20.784 23:41:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.784 23:41:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.784 23:41:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.784 ************************************ 00:07:20.784 START TEST bdev_bounds 00:07:20.784 ************************************ 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73015 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.784 Process bdevio pid: 73015 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73015' 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73015 00:07:20.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73015 ']' 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.784 23:41:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:20.785 23:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.785 23:41:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:20.785 [2024-11-26 23:41:08.734802] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:20.785 [2024-11-26 23:41:08.734926] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73015 ] 00:07:20.785 [2024-11-26 23:41:08.877845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.785 [2024-11-26 23:41:08.904687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.785 [2024-11-26 23:41:08.904932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.785 [2024-11-26 23:41:08.905028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.725 23:41:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.725 23:41:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:21.725 23:41:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:21.725 I/O targets: 00:07:21.725 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:21.725 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:21.725 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:21.725 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:21.725 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:21.725 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:21.725 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:21.725 00:07:21.725 00:07:21.725 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.725 http://cunit.sourceforge.net/ 00:07:21.725 00:07:21.725 00:07:21.725 Suite: bdevio tests on: Nvme3n1 00:07:21.725 Test: blockdev write read block ...passed 00:07:21.725 Test: blockdev write zeroes read block ...passed 00:07:21.725 Test: blockdev write zeroes read no split ...passed 00:07:21.725 Test: blockdev write zeroes read split ...passed 00:07:21.725 Test: blockdev write zeroes read split partial ...passed 00:07:21.725 Test: blockdev reset ...[2024-11-26 23:41:09.715188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:21.725 passed 00:07:21.725 Test: blockdev write read 8 blocks ...[2024-11-26 23:41:09.718215] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:21.725 passed 00:07:21.725 Test: blockdev write read size > 128k ...passed 00:07:21.725 Test: blockdev write read invalid size ...passed 00:07:21.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.725 Test: blockdev write read max offset ...passed 00:07:21.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.725 Test: blockdev writev readv 8 blocks ...passed 00:07:21.725 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.725 Test: blockdev writev readv block ...passed 00:07:21.725 Test: blockdev writev readv size > 128k ...passed 00:07:21.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.725 Test: blockdev comparev and writev ...[2024-11-26 23:41:09.733652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af80e000 len:0x1000 00:07:21.725 [2024-11-26 23:41:09.733703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:21.725 passed 00:07:21.725 Test: blockdev nvme passthru rw ...passed 00:07:21.725 Test: blockdev nvme passthru vendor specific ...[2024-11-26 23:41:09.735945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:07:21.725 Test: blockdev nvme admin passthru ...RP2 0x0 00:07:21.725 [2024-11-26 23:41:09.736110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:21.725 passed 00:07:21.725 Test: blockdev copy ...passed 00:07:21.725 Suite: bdevio tests on: Nvme2n3 00:07:21.725 Test: blockdev write read block ...passed 00:07:21.725 Test: blockdev write zeroes read block ...passed 00:07:21.725 Test: blockdev write zeroes read no split ...passed 00:07:21.725 Test: blockdev write zeroes read split ...passed 00:07:21.725 Test: blockdev write zeroes read split partial ...passed 00:07:21.725 Test: blockdev reset ...[2024-11-26 23:41:09.765341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:21.725 [2024-11-26 23:41:09.768528] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spasseduccessful. 
00:07:21.725 00:07:21.725 Test: blockdev write read 8 blocks ...passed 00:07:21.725 Test: blockdev write read size > 128k ...passed 00:07:21.725 Test: blockdev write read invalid size ...passed 00:07:21.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.725 Test: blockdev write read max offset ...passed 00:07:21.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.725 Test: blockdev writev readv 8 blocks ...passed 00:07:21.725 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.725 Test: blockdev writev readv block ...passed 00:07:21.725 Test: blockdev writev readv size > 128k ...passed 00:07:21.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.725 Test: blockdev comparev and writev ...[2024-11-26 23:41:09.786252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af808000 len:0x1000 00:07:21.725 [2024-11-26 23:41:09.786301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:21.725 passed 00:07:21.725 Test: blockdev nvme passthru rw ...passed 00:07:21.725 Test: blockdev nvme passthru vendor specific ...passed 00:07:21.725 Test: blockdev nvme admin passthru ...[2024-11-26 23:41:09.788541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:21.725 [2024-11-26 23:41:09.788574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:21.725 passed 00:07:21.725 Test: blockdev copy ...passed 00:07:21.725 Suite: bdevio tests on: Nvme2n2 00:07:21.725 Test: blockdev write read block ...passed 00:07:21.725 Test: blockdev write zeroes read block ...passed 00:07:21.725 Test: blockdev write zeroes read no split ...passed 00:07:21.725 Test: blockdev write zeroes read split ...passed 00:07:21.725 Test: blockdev write zeroes read split partial ...passed 00:07:21.725 Test: blockdev reset ...[2024-11-26 23:41:09.817164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:21.725 [2024-11-26 23:41:09.819244] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:07:21.725 Test: blockdev write read 8 blocks ...uccessful. 
00:07:21.725 passed 00:07:21.725 Test: blockdev write read size > 128k ...passed 00:07:21.725 Test: blockdev write read invalid size ...passed 00:07:21.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.725 Test: blockdev write read max offset ...passed 00:07:21.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.725 Test: blockdev writev readv 8 blocks ...passed 00:07:21.725 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.725 Test: blockdev writev readv block ...passed 00:07:21.725 Test: blockdev writev readv size > 128k ...passed 00:07:21.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.725 Test: blockdev comparev and writev ...[2024-11-26 23:41:09.834300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af802000 len:0x1000 00:07:21.725 [2024-11-26 23:41:09.834340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:21.725 passed 00:07:21.725 Test: blockdev nvme passthru rw ...passed 00:07:21.725 Test: blockdev nvme passthru vendor specific ...passed 00:07:21.725 Test: blockdev nvme admin passthru ...[2024-11-26 23:41:09.836148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:21.725 [2024-11-26 23:41:09.836180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:21.725 passed 00:07:21.725 Test: blockdev copy ...passed 00:07:21.725 Suite: bdevio tests on: Nvme2n1 00:07:21.725 Test: blockdev write read block ...passed 00:07:21.725 Test: blockdev write zeroes read block ...passed 00:07:21.725 Test: blockdev write zeroes read no split ...passed 00:07:21.725 Test: blockdev write zeroes read split ...passed 00:07:21.987 Test: blockdev write zeroes read split partial ...passed 00:07:21.987 Test: blockdev reset ...[2024-11-26 23:41:09.855117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:21.987 passed 00:07:21.987 Test: blockdev write read 8 blocks ...[2024-11-26 23:41:09.857560] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:21.987 passed 00:07:21.987 Test: blockdev write read size > 128k ...passed 00:07:21.987 Test: blockdev write read invalid size ...passed 00:07:21.987 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.987 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.987 Test: blockdev write read max offset ...passed 00:07:21.987 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.987 Test: blockdev writev readv 8 blocks ...passed 00:07:21.987 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.987 Test: blockdev writev readv block ...passed 00:07:21.987 Test: blockdev writev readv size > 128k ...passed 00:07:21.987 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.987 Test: blockdev comparev and writev ...[2024-11-26 23:41:09.870905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:21.987 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2afc04000 len:0x1000 00:07:21.987 [2024-11-26 23:41:09.871032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:21.987 passed 00:07:21.987 Test: blockdev nvme passthru vendor specific ...passed 00:07:21.987 Test: blockdev nvme admin passthru ...[2024-11-26 23:41:09.872743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:21.987 [2024-11-26 23:41:09.872777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:21.987 passed 00:07:21.987 Test: blockdev copy ...passed 00:07:21.987 Suite: bdevio tests on: Nvme1n1p2 00:07:21.987 Test: blockdev write read block ...passed 00:07:21.987 Test: blockdev write zeroes read block ...passed 00:07:21.987 Test: blockdev write zeroes read no split ...passed 00:07:21.987 Test: blockdev write zeroes read split ...passed 00:07:21.987 Test: blockdev write zeroes read split partial ...passed 00:07:21.987 Test: blockdev reset ...[2024-11-26 23:41:09.893489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:21.987 passed 00:07:21.987 Test: blockdev write read 8 blocks ...[2024-11-26 23:41:09.895004] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:21.987 passed 00:07:21.987 Test: blockdev write read size > 128k ...passed 00:07:21.987 Test: blockdev write read invalid size ...passed 00:07:21.987 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.987 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.987 Test: blockdev write read max offset ...passed 00:07:21.987 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.987 Test: blockdev writev readv 8 blocks ...passed 00:07:21.987 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.987 Test: blockdev writev readv block ...passed 00:07:21.987 Test: blockdev writev readv size > 128k ...passed 00:07:21.987 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.987 Test: blockdev comparev and writev ...[2024-11-26 23:41:09.904446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cb83d000 len:0x1000 00:07:21.987 [2024-11-26 23:41:09.904482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:21.987 passed 00:07:21.987 Test: blockdev nvme passthru rw ...passed 00:07:21.987 Test: blockdev nvme passthru vendor specific ...passed 00:07:21.987 Test: blockdev nvme admin passthru ...passed 00:07:21.987 Test: blockdev copy ...passed 00:07:21.987 Suite: bdevio tests on: Nvme1n1p1 00:07:21.987 Test: blockdev write read block ...passed 00:07:21.987 Test: blockdev write zeroes read block ...passed 00:07:21.987 Test: blockdev write zeroes read no split ...passed 00:07:21.987 Test: blockdev write zeroes read split ...passed 00:07:21.987 Test: blockdev write zeroes read split partial ...passed 00:07:21.987 Test: blockdev reset ...[2024-11-26 23:41:09.920300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:21.987 passed 00:07:21.987 Test: blockdev write read 8 blocks ...[2024-11-26 23:41:09.921853] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
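Note the LBA in the compare command for Nvme1n1p2: it runs at namespace LBA 655360, whereas the whole-namespace suites compare at LBA 0, which is consistent with the gpt part bdevs translating bdev-relative offsets by their partition start. One way to cross-check this against the partition layout is sketched below; it assumes an SPDK app exposing these bdevs is still running and reachable on its default RPC socket, and only relies on the .name, .num_blocks and .block_size fields of bdev_get_bdevs output (partition details such as GUIDs and offsets appear under driver_specific, with field names that vary by SPDK version).

```bash
# List the gpt partition bdevs and their sizes so the translated LBAs in the
# compare commands above can be matched against the partition layout.
./scripts/rpc.py bdev_get_bdevs |
    jq -r '.[] | select(.name | startswith("Nvme1n1p")) |
           "\(.name): \(.num_blocks) blocks of \(.block_size) bytes"'
```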
00:07:21.987 passed 00:07:21.987 Test: blockdev write read size > 128k ...passed 00:07:21.987 Test: blockdev write read invalid size ...passed 00:07:21.987 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.987 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.987 Test: blockdev write read max offset ...passed 00:07:21.987 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.987 Test: blockdev writev readv 8 blocks ...passed 00:07:21.987 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.987 Test: blockdev writev readv block ...passed 00:07:21.987 Test: blockdev writev readv size > 128k ...passed 00:07:21.987 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.987 Test: blockdev comparev and writev ...[2024-11-26 23:41:09.936025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2cb839000 len:0x1000 00:07:21.987 [2024-11-26 23:41:09.936061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:21.987 passed 00:07:21.987 Test: blockdev nvme passthru rw ...passed 00:07:21.987 Test: blockdev nvme passthru vendor specific ...passed 00:07:21.987 Test: blockdev nvme admin passthru ...passed 00:07:21.987 Test: blockdev copy ...passed 00:07:21.987 Suite: bdevio tests on: Nvme0n1 00:07:21.987 Test: blockdev write read block ...passed 00:07:21.987 Test: blockdev write zeroes read block ...passed 00:07:21.987 Test: blockdev write zeroes read no split ...passed 00:07:21.987 Test: blockdev write zeroes read split ...passed 00:07:21.987 Test: blockdev write zeroes read split partial ...passed 00:07:21.987 Test: blockdev reset ...[2024-11-26 23:41:09.956683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:21.987 passed 00:07:21.988 Test: blockdev write read 8 blocks ...[2024-11-26 23:41:09.959180] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:21.988 passed 00:07:21.988 Test: blockdev write read size > 128k ...passed 00:07:21.988 Test: blockdev write read invalid size ...passed 00:07:21.988 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:21.988 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:21.988 Test: blockdev write read max offset ...passed 00:07:21.988 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:21.988 Test: blockdev writev readv 8 blocks ...passed 00:07:21.988 Test: blockdev writev readv 30 x 1block ...passed 00:07:21.988 Test: blockdev writev readv block ...passed 00:07:21.988 Test: blockdev writev readv size > 128k ...passed 00:07:21.988 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:21.988 Test: blockdev comparev and writev ...passed 00:07:21.988 Test: blockdev nvme passthru rw ...[2024-11-26 23:41:09.971586] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:21.988 separate metadata which is not supported yet. 
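One suite skips the comparison path entirely: bdevio reports that comparev_and_writev is skipped on Nvme0n1 because that bdev exposes separate (non-interleaved) metadata, which the compare path does not support yet. A quick way to see which bdevs carry separate metadata is sketched below; the md_size and md_interleave field names are what bdev_get_bdevs typically reports, so treat them as an assumption if your SPDK version differs.

```bash
# Print bdevs whose metadata is separate rather than interleaved with data.
./scripts/rpc.py bdev_get_bdevs |
    jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'
```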
00:07:21.988 passed 00:07:21.988 Test: blockdev nvme passthru vendor specific ...[2024-11-26 23:41:09.972731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:21.988 [2024-11-26 23:41:09.972768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:21.988 passed 00:07:21.988 Test: blockdev nvme admin passthru ...passed 00:07:21.988 Test: blockdev copy ...passed 00:07:21.988 00:07:21.988 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.988 suites 7 7 n/a 0 0 00:07:21.988 tests 161 161 161 0 0 00:07:21.988 asserts 1025 1025 1025 0 n/a 00:07:21.988 00:07:21.988 Elapsed time = 0.622 seconds 00:07:21.988 0 00:07:21.988 23:41:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73015 00:07:21.988 23:41:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73015 ']' 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73015 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73015 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73015' 00:07:21.988 killing process with pid 73015 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73015 00:07:21.988 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73015 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:22.250 00:07:22.250 real 0m1.529s 00:07:22.250 user 0m3.880s 00:07:22.250 sys 0m0.286s 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:22.250 ************************************ 00:07:22.250 END TEST bdev_bounds 00:07:22.250 ************************************ 00:07:22.250 23:41:10 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:22.250 23:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:22.250 23:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.250 23:41:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.250 ************************************ 00:07:22.250 START TEST bdev_nbd 00:07:22.250 ************************************ 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:22.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73064 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73064 /var/tmp/spdk-nbd.sock 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73064 ']' 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:22.250 23:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:22.250 [2024-11-26 23:41:10.327558] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
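The bdev_nbd prologue above boots a minimal SPDK app (bdev_svc) with the JSON bdev config and a dedicated RPC socket at /var/tmp/spdk-nbd.sock, then waits for it to come up before exporting bdevs over NBD. A stripped-down version of that startup pattern is shown below; paths are relative to an SPDK checkout, and the polling loop stands in for the harness's waitforlisten helper.

```bash
# Start bdev_svc with the bdev config and a dedicated RPC socket, then wait
# until the app answers a trivial RPC before issuing nbd_start_disk calls.
sock=/var/tmp/spdk-nbd.sock
./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 --json ./test/bdev/bdev.json &
svc_pid=$!

for _ in $(seq 1 100); do
    if ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
```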
00:07:22.250 [2024-11-26 23:41:10.327680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.511 [2024-11-26 23:41:10.468579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.511 [2024-11-26 23:41:10.504733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:23.083 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.344 1+0 records in 00:07:23.344 1+0 records out 00:07:23.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00208264 s, 2.0 MB/s 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:23.344 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.604 1+0 records in 00:07:23.604 1+0 records out 00:07:23.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682438 s, 6.0 MB/s 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:23.604 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.864 1+0 records in 00:07:23.864 1+0 records out 00:07:23.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770514 s, 5.3 MB/s 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:23.864 23:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.144 1+0 records in 00:07:24.144 1+0 records out 00:07:24.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645248 s, 6.3 MB/s 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.144 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.407 1+0 records in 00:07:24.407 1+0 records out 00:07:24.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580739 s, 7.1 MB/s 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.407 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.667 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.668 1+0 records in 00:07:24.668 1+0 records out 00:07:24.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071175 s, 5.8 MB/s 00:07:24.668 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.668 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.668 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.668 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.668 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.668 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:24.668 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.668 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:24.929 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.930 1+0 records in 00:07:24.930 1+0 records out 00:07:24.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900197 s, 4.6 MB/s 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.930 23:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd0", 00:07:25.191 "bdev_name": "Nvme0n1" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd1", 00:07:25.191 "bdev_name": "Nvme1n1p1" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd2", 00:07:25.191 "bdev_name": "Nvme1n1p2" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd3", 00:07:25.191 "bdev_name": "Nvme2n1" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd4", 00:07:25.191 "bdev_name": "Nvme2n2" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd5", 00:07:25.191 "bdev_name": "Nvme2n3" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd6", 00:07:25.191 "bdev_name": "Nvme3n1" 00:07:25.191 } 00:07:25.191 ]' 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd0", 00:07:25.191 "bdev_name": "Nvme0n1" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd1", 00:07:25.191 "bdev_name": "Nvme1n1p1" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd2", 00:07:25.191 "bdev_name": "Nvme1n1p2" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd3", 00:07:25.191 "bdev_name": "Nvme2n1" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd4", 00:07:25.191 "bdev_name": "Nvme2n2" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd5", 00:07:25.191 "bdev_name": "Nvme2n3" 00:07:25.191 }, 00:07:25.191 { 00:07:25.191 "nbd_device": "/dev/nbd6", 00:07:25.191 "bdev_name": "Nvme3n1" 00:07:25.191 } 00:07:25.191 ]' 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.191 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.473 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.734 23:41:13 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.995 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:26.255 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.256 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.516 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
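The teardown loop above stops each NBD export with nbd_stop_disk and then calls waitfornbd_exit, which simply polls /proc/partitions until the device name disappears. The sketch below is reconstructed from the trace statements shown; the in-tree helper may differ in minor details.

```bash
# Wait for an nbd device (e.g. "nbd0") to vanish from /proc/partitions after
# nbd_stop_disk, giving the kernel up to ~2 seconds to tear it down.
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # still listed, retry
        else
            break       # gone, teardown finished
        fi
    done
    return 0
}
```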
00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.776 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:27.036 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:27.037 23:41:14 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.037 23:41:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:27.037 /dev/nbd0 00:07:27.037 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:27.037 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:27.037 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:27.037 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:27.037 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.037 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.037 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.297 1+0 records in 00:07:27.297 1+0 records out 00:07:27.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126126 s, 3.2 MB/s 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:27.297 /dev/nbd1 00:07:27.297 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.298 23:41:15 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.298 1+0 records in 00:07:27.298 1+0 records out 00:07:27.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664885 s, 6.2 MB/s 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.298 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:27.560 /dev/nbd10 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.560 1+0 records in 00:07:27.560 1+0 records out 00:07:27.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00088702 s, 4.6 MB/s 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.560 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:27.820 /dev/nbd11 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.820 1+0 records in 00:07:27.820 1+0 records out 00:07:27.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000959746 s, 4.3 MB/s 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.820 23:41:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:28.081 /dev/nbd12 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.081 1+0 records in 00:07:28.081 1+0 records out 00:07:28.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010642 s, 3.8 MB/s 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.081 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:28.344 /dev/nbd13 00:07:28.344 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:28.344 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:28.344 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:28.344 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.345 1+0 records in 00:07:28.345 1+0 records out 00:07:28.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138622 s, 3.0 MB/s 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.345 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:28.611 /dev/nbd14 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.611 1+0 records in 00:07:28.611 1+0 records out 00:07:28.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114516 s, 3.6 MB/s 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.611 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd0", 00:07:28.872 "bdev_name": "Nvme0n1" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd1", 00:07:28.872 "bdev_name": "Nvme1n1p1" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd10", 00:07:28.872 "bdev_name": "Nvme1n1p2" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd11", 00:07:28.872 "bdev_name": "Nvme2n1" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd12", 00:07:28.872 "bdev_name": "Nvme2n2" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd13", 00:07:28.872 "bdev_name": "Nvme2n3" 
00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd14", 00:07:28.872 "bdev_name": "Nvme3n1" 00:07:28.872 } 00:07:28.872 ]' 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd0", 00:07:28.872 "bdev_name": "Nvme0n1" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd1", 00:07:28.872 "bdev_name": "Nvme1n1p1" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd10", 00:07:28.872 "bdev_name": "Nvme1n1p2" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd11", 00:07:28.872 "bdev_name": "Nvme2n1" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd12", 00:07:28.872 "bdev_name": "Nvme2n2" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd13", 00:07:28.872 "bdev_name": "Nvme2n3" 00:07:28.872 }, 00:07:28.872 { 00:07:28.872 "nbd_device": "/dev/nbd14", 00:07:28.872 "bdev_name": "Nvme3n1" 00:07:28.872 } 00:07:28.872 ]' 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:28.872 /dev/nbd1 00:07:28.872 /dev/nbd10 00:07:28.872 /dev/nbd11 00:07:28.872 /dev/nbd12 00:07:28.872 /dev/nbd13 00:07:28.872 /dev/nbd14' 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:28.872 /dev/nbd1 00:07:28.872 /dev/nbd10 00:07:28.872 /dev/nbd11 00:07:28.872 /dev/nbd12 00:07:28.872 /dev/nbd13 00:07:28.872 /dev/nbd14' 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:28.872 256+0 records in 00:07:28.872 256+0 records out 00:07:28.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668557 s, 157 MB/s 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:28.872 23:41:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:29.133 256+0 records in 00:07:29.133 256+0 records out 00:07:29.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.221877 s, 4.7 MB/s 00:07:29.133 23:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.133 23:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:29.395 256+0 records in 00:07:29.395 256+0 records out 00:07:29.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223831 s, 4.7 MB/s 00:07:29.395 23:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.395 23:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:29.657 256+0 records in 00:07:29.657 256+0 records out 00:07:29.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227932 s, 4.6 MB/s 00:07:29.657 23:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.657 23:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:29.918 256+0 records in 00:07:29.918 256+0 records out 00:07:29.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227627 s, 4.6 MB/s 00:07:29.918 23:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.918 23:41:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:29.918 256+0 records in 00:07:29.918 256+0 records out 00:07:29.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224763 s, 4.7 MB/s 00:07:29.918 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.918 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:30.179 256+0 records in 00:07:30.179 256+0 records out 00:07:30.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224138 s, 4.7 MB/s 00:07:30.179 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.179 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:30.446 256+0 records in 00:07:30.446 256+0 records out 00:07:30.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22782 s, 4.6 MB/s 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.446 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.717 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.979 23:41:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.250 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:31.515 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:31.515 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:31.515 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:31.515 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.515 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.515 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:31.515 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.515 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.516 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.516 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.776 23:41:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.037 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:32.298 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:32.299 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:32.299 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.299 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:32.299 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:32.561 malloc_lvol_verify 00:07:32.561 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:32.823 5fb8ca56-1017-41e1-a166-b0d27803fdaf 00:07:32.823 23:41:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:33.084 26a196ff-e5e4-4d62-8588-f348ba78ee2f 00:07:33.084 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:33.345 /dev/nbd0 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:33.345 mke2fs 1.47.0 (5-Feb-2023) 00:07:33.345 Discarding device blocks: 0/4096 done 00:07:33.345 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:33.345 00:07:33.345 Allocating group tables: 0/1 done 00:07:33.345 Writing inode tables: 0/1 done 00:07:33.345 Creating journal (1024 blocks): done 00:07:33.345 Writing superblocks and filesystem accounting information: 0/1 done 00:07:33.345 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:33.345 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.605 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73064 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73064 ']' 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73064 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73064 00:07:33.606 killing process with pid 73064 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73064' 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73064 00:07:33.606 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73064 00:07:33.866 ************************************ 00:07:33.866 END TEST bdev_nbd 00:07:33.866 ************************************ 00:07:33.866 23:41:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:33.866 00:07:33.866 real 0m11.583s 00:07:33.866 user 0m15.919s 00:07:33.866 sys 0m4.057s 00:07:33.866 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.866 23:41:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:33.866 skipping fio tests on NVMe due to multi-ns failures. 00:07:33.866 23:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:33.866 23:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:33.867 23:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:33.867 23:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:33.867 23:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:33.867 23:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:33.867 23:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:33.867 23:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.867 23:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.867 ************************************ 00:07:33.867 START TEST bdev_verify 00:07:33.867 ************************************ 00:07:33.867 23:41:21 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:33.867 [2024-11-26 23:41:21.986057] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:33.867 [2024-11-26 23:41:21.986195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73479 ] 00:07:34.127 [2024-11-26 23:41:22.133114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.127 [2024-11-26 23:41:22.174848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.127 [2024-11-26 23:41:22.174925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.697 Running I/O for 5 seconds... 
00:07:37.022 17856.00 IOPS, 69.75 MiB/s [2024-11-26T23:41:26.117Z] 18816.00 IOPS, 73.50 MiB/s [2024-11-26T23:41:27.078Z] 18005.33 IOPS, 70.33 MiB/s [2024-11-26T23:41:28.023Z] 18624.00 IOPS, 72.75 MiB/s [2024-11-26T23:41:28.023Z] 18393.60 IOPS, 71.85 MiB/s 00:07:39.892 Latency(us) 00:07:39.892 [2024-11-26T23:41:28.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.892 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x0 length 0xbd0bd 00:07:39.892 Nvme0n1 : 5.06 1289.65 5.04 0.00 0.00 98926.71 20669.05 93161.94 00:07:39.892 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:39.892 Nvme0n1 : 5.06 1290.11 5.04 0.00 0.00 98831.44 22786.36 82676.18 00:07:39.892 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x0 length 0x4ff80 00:07:39.892 Nvme1n1p1 : 5.06 1288.85 5.03 0.00 0.00 98582.93 22181.42 80659.69 00:07:39.892 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:39.892 Nvme1n1p1 : 5.06 1289.72 5.04 0.00 0.00 98764.95 24399.56 76626.71 00:07:39.892 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x0 length 0x4ff7f 00:07:39.892 Nvme1n1p2 : 5.07 1288.04 5.03 0.00 0.00 98434.94 19761.62 80256.39 00:07:39.892 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:39.892 Nvme1n1p2 : 5.06 1288.91 5.03 0.00 0.00 98670.13 24298.73 79046.50 00:07:39.892 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x0 length 0x80000 00:07:39.892 Nvme2n1 : 5.07 1287.31 5.03 0.00 0.00 98292.34 18450.90 78239.90 00:07:39.892 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x80000 length 0x80000 00:07:39.892 Nvme2n1 : 5.07 1288.13 5.03 0.00 0.00 98532.14 26214.40 79046.50 00:07:39.892 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x0 length 0x80000 00:07:39.892 Nvme2n2 : 5.10 1305.55 5.10 0.00 0.00 96923.91 9830.40 76626.71 00:07:39.892 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x80000 length 0x80000 00:07:39.892 Nvme2n2 : 5.08 1298.25 5.07 0.00 0.00 97727.09 4259.84 77433.30 00:07:39.892 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x0 length 0x80000 00:07:39.892 Nvme2n3 : 5.10 1305.21 5.10 0.00 0.00 96791.52 9326.28 77433.30 00:07:39.892 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x80000 length 0x80000 00:07:39.892 Nvme2n3 : 5.08 1297.90 5.07 0.00 0.00 97556.11 4486.70 77433.30 00:07:39.892 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x0 length 0x20000 00:07:39.892 Nvme3n1 : 5.10 1304.87 5.10 0.00 0.00 96722.60 8771.74 82676.18 00:07:39.892 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:39.892 Verification LBA range: start 0x20000 length 0x20000 00:07:39.892 Nvme3n1 : 
5.08 1297.54 5.07 0.00 0.00 97383.52 4587.52 79853.10 00:07:39.892 [2024-11-26T23:41:28.023Z] =================================================================================================================== 00:07:39.892 [2024-11-26T23:41:28.023Z] Total : 18120.04 70.78 0.00 0.00 98004.08 4259.84 93161.94 00:07:40.467 00:07:40.467 real 0m6.425s 00:07:40.467 user 0m12.004s 00:07:40.467 sys 0m0.312s 00:07:40.467 23:41:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.467 ************************************ 00:07:40.467 23:41:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:40.467 END TEST bdev_verify 00:07:40.467 ************************************ 00:07:40.467 23:41:28 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:40.467 23:41:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:40.467 23:41:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.467 23:41:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.467 ************************************ 00:07:40.467 START TEST bdev_verify_big_io 00:07:40.467 ************************************ 00:07:40.467 23:41:28 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:40.467 [2024-11-26 23:41:28.461348] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:40.467 [2024-11-26 23:41:28.461457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73571 ] 00:07:40.729 [2024-11-26 23:41:28.606128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.729 [2024-11-26 23:41:28.631425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.729 [2024-11-26 23:41:28.631458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.991 Running I/O for 5 seconds... 
00:07:46.883 1544.00 IOPS, 96.50 MiB/s [2024-11-26T23:41:35.279Z] 2745.50 IOPS, 171.59 MiB/s [2024-11-26T23:41:35.540Z] 3352.33 IOPS, 209.52 MiB/s 00:07:47.409 Latency(us) 00:07:47.409 [2024-11-26T23:41:35.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.409 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x0 length 0xbd0b 00:07:47.409 Nvme0n1 : 5.80 101.48 6.34 0.00 0.00 1184450.61 37910.06 1342177.28 00:07:47.409 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:47.409 Nvme0n1 : 5.60 114.35 7.15 0.00 0.00 1074616.16 23391.31 1284102.30 00:07:47.409 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x0 length 0x4ff8 00:07:47.409 Nvme1n1p1 : 5.81 101.79 6.36 0.00 0.00 1169055.44 85095.98 1871304.86 00:07:47.409 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:47.409 Nvme1n1p1 : 5.80 115.04 7.19 0.00 0.00 1025600.38 77433.30 1077613.49 00:07:47.409 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x0 length 0x4ff7 00:07:47.409 Nvme1n1p2 : 5.99 110.62 6.91 0.00 0.00 1032181.28 86305.87 1380893.93 00:07:47.409 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:47.409 Nvme1n1p2 : 5.89 118.84 7.43 0.00 0.00 969284.93 89128.96 961463.53 00:07:47.409 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x0 length 0x8000 00:07:47.409 Nvme2n1 : 6.06 114.51 7.16 0.00 0.00 965728.38 98001.53 1393799.48 00:07:47.409 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x8000 length 0x8000 00:07:47.409 Nvme2n1 : 5.96 120.13 7.51 0.00 0.00 934847.52 88725.66 1058255.16 00:07:47.409 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x0 length 0x8000 00:07:47.409 Nvme2n2 : 6.11 114.72 7.17 0.00 0.00 939235.80 47185.92 1845493.76 00:07:47.409 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x8000 length 0x8000 00:07:47.409 Nvme2n2 : 6.06 126.68 7.92 0.00 0.00 861881.11 99211.42 1084066.26 00:07:47.409 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x0 length 0x8000 00:07:47.409 Nvme2n3 : 6.16 127.08 7.94 0.00 0.00 820860.26 14821.22 2039077.02 00:07:47.409 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x8000 length 0x8000 00:07:47.409 Nvme2n3 : 6.15 135.29 8.46 0.00 0.00 786587.48 32868.82 1109877.37 00:07:47.409 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x0 length 0x2000 00:07:47.409 Nvme3n1 : 6.24 185.33 11.58 0.00 0.00 548232.03 1001.94 1522854.99 00:07:47.409 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:47.409 Verification LBA range: start 0x2000 length 0x2000 00:07:47.409 Nvme3n1 : 6.16 145.55 9.10 0.00 0.00 709290.30 1424.15 1129235.69 00:07:47.409 
[2024-11-26T23:41:35.540Z] =================================================================================================================== 00:07:47.409 [2024-11-26T23:41:35.540Z] Total : 1731.41 108.21 0.00 0.00 899342.88 1001.94 2039077.02 00:07:47.981 00:07:47.981 real 0m7.694s 00:07:47.981 user 0m14.678s 00:07:47.981 sys 0m0.235s 00:07:47.981 23:41:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.981 23:41:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:47.981 ************************************ 00:07:47.981 END TEST bdev_verify_big_io 00:07:47.981 ************************************ 00:07:48.241 23:41:36 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:48.241 23:41:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:48.241 23:41:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.241 23:41:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.241 ************************************ 00:07:48.241 START TEST bdev_write_zeroes 00:07:48.241 ************************************ 00:07:48.241 23:41:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:48.241 [2024-11-26 23:41:36.208473] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:48.241 [2024-11-26 23:41:36.208571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73675 ] 00:07:48.241 [2024-11-26 23:41:36.343520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.241 [2024-11-26 23:41:36.367717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.808 Running I/O for 1 seconds... 
00:07:49.745 63168.00 IOPS, 246.75 MiB/s 00:07:49.745 Latency(us) 00:07:49.745 [2024-11-26T23:41:37.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.745 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:49.745 Nvme0n1 : 1.02 9010.51 35.20 0.00 0.00 14176.10 6755.25 25206.15 00:07:49.745 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:49.745 Nvme1n1p1 : 1.02 8999.46 35.15 0.00 0.00 14173.56 10485.76 24601.21 00:07:49.745 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:49.745 Nvme1n1p2 : 1.03 8988.56 35.11 0.00 0.00 14145.48 10536.17 23794.61 00:07:49.745 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:49.745 Nvme2n1 : 1.03 8978.48 35.07 0.00 0.00 14138.03 10889.06 23492.14 00:07:49.745 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:49.745 Nvme2n2 : 1.03 8968.37 35.03 0.00 0.00 14097.63 9376.69 22584.71 00:07:49.745 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:49.745 Nvme2n3 : 1.03 8958.36 34.99 0.00 0.00 14080.42 8318.03 23693.78 00:07:49.745 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:49.745 Nvme3n1 : 1.03 8948.32 34.95 0.00 0.00 14070.85 7561.85 25306.98 00:07:49.745 [2024-11-26T23:41:37.876Z] =================================================================================================================== 00:07:49.745 [2024-11-26T23:41:37.876Z] Total : 62852.07 245.52 0.00 0.00 14126.01 6755.25 25306.98 00:07:50.005 00:07:50.005 real 0m1.844s 00:07:50.005 user 0m1.557s 00:07:50.005 sys 0m0.178s 00:07:50.005 ************************************ 00:07:50.005 END TEST bdev_write_zeroes 00:07:50.005 ************************************ 00:07:50.005 23:41:37 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.005 23:41:37 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:50.005 23:41:38 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:50.005 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:50.005 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.005 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.005 ************************************ 00:07:50.005 START TEST bdev_json_nonenclosed 00:07:50.005 ************************************ 00:07:50.005 23:41:38 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:50.005 [2024-11-26 23:41:38.120170] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:07:50.005 [2024-11-26 23:41:38.120300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73717 ] 00:07:50.267 [2024-11-26 23:41:38.263981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.267 [2024-11-26 23:41:38.290969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.267 [2024-11-26 23:41:38.291069] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:50.267 [2024-11-26 23:41:38.291086] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:50.267 [2024-11-26 23:41:38.291100] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.267 00:07:50.267 real 0m0.306s 00:07:50.267 user 0m0.120s 00:07:50.267 sys 0m0.084s 00:07:50.267 23:41:38 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.267 23:41:38 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:50.267 ************************************ 00:07:50.267 END TEST bdev_json_nonenclosed 00:07:50.267 ************************************ 00:07:50.539 23:41:38 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:50.539 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:50.539 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.539 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.539 ************************************ 00:07:50.539 START TEST bdev_json_nonarray 00:07:50.539 ************************************ 00:07:50.539 23:41:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:50.539 [2024-11-26 23:41:38.500766] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:07:50.539 [2024-11-26 23:41:38.500932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73737 ] 00:07:50.539 [2024-11-26 23:41:38.650433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.806 [2024-11-26 23:41:38.676969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.806 [2024-11-26 23:41:38.677081] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:50.806 [2024-11-26 23:41:38.677101] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:50.806 [2024-11-26 23:41:38.677117] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.806 00:07:50.806 real 0m0.314s 00:07:50.806 user 0m0.117s 00:07:50.806 sys 0m0.093s 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:50.806 ************************************ 00:07:50.806 END TEST bdev_json_nonarray 00:07:50.806 ************************************ 00:07:50.806 23:41:38 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:50.806 23:41:38 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:50.806 23:41:38 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:50.806 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.806 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.806 23:41:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.806 ************************************ 00:07:50.806 START TEST bdev_gpt_uuid 00:07:50.806 ************************************ 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73757 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73757 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73757 ']' 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.806 23:41:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:50.806 [2024-11-26 23:41:38.887657] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:07:50.806 [2024-11-26 23:41:38.887786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73757 ] 00:07:51.068 [2024-11-26 23:41:39.030511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.068 [2024-11-26 23:41:39.067930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.640 23:41:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.640 23:41:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:51.640 23:41:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:51.640 23:41:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.640 23:41:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.208 Some configs were skipped because the RPC state that can call them passed over. 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:52.208 { 00:07:52.208 "name": "Nvme1n1p1", 00:07:52.208 "aliases": [ 00:07:52.208 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:52.208 ], 00:07:52.208 "product_name": "GPT Disk", 00:07:52.208 "block_size": 4096, 00:07:52.208 "num_blocks": 655104, 00:07:52.208 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:52.208 "assigned_rate_limits": { 00:07:52.208 "rw_ios_per_sec": 0, 00:07:52.208 "rw_mbytes_per_sec": 0, 00:07:52.208 "r_mbytes_per_sec": 0, 00:07:52.208 "w_mbytes_per_sec": 0 00:07:52.208 }, 00:07:52.208 "claimed": false, 00:07:52.208 "zoned": false, 00:07:52.208 "supported_io_types": { 00:07:52.208 "read": true, 00:07:52.208 "write": true, 00:07:52.208 "unmap": true, 00:07:52.208 "flush": true, 00:07:52.208 "reset": true, 00:07:52.208 "nvme_admin": false, 00:07:52.208 "nvme_io": false, 00:07:52.208 "nvme_io_md": false, 00:07:52.208 "write_zeroes": true, 00:07:52.208 "zcopy": false, 00:07:52.208 "get_zone_info": false, 00:07:52.208 "zone_management": false, 00:07:52.208 "zone_append": false, 00:07:52.208 "compare": true, 00:07:52.208 "compare_and_write": false, 00:07:52.208 "abort": true, 00:07:52.208 "seek_hole": false, 00:07:52.208 "seek_data": false, 00:07:52.208 "copy": true, 00:07:52.208 "nvme_iov_md": false 00:07:52.208 }, 00:07:52.208 "driver_specific": { 
00:07:52.208 "gpt": { 00:07:52.208 "base_bdev": "Nvme1n1", 00:07:52.208 "offset_blocks": 256, 00:07:52.208 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:52.208 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:52.208 "partition_name": "SPDK_TEST_first" 00:07:52.208 } 00:07:52.208 } 00:07:52.208 } 00:07:52.208 ]' 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.208 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:52.208 { 00:07:52.208 "name": "Nvme1n1p2", 00:07:52.208 "aliases": [ 00:07:52.208 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:52.208 ], 00:07:52.208 "product_name": "GPT Disk", 00:07:52.208 "block_size": 4096, 00:07:52.208 "num_blocks": 655103, 00:07:52.208 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:52.208 "assigned_rate_limits": { 00:07:52.208 "rw_ios_per_sec": 0, 00:07:52.208 "rw_mbytes_per_sec": 0, 00:07:52.208 "r_mbytes_per_sec": 0, 00:07:52.208 "w_mbytes_per_sec": 0 00:07:52.208 }, 00:07:52.208 "claimed": false, 00:07:52.208 "zoned": false, 00:07:52.208 "supported_io_types": { 00:07:52.208 "read": true, 00:07:52.208 "write": true, 00:07:52.208 "unmap": true, 00:07:52.208 "flush": true, 00:07:52.208 "reset": true, 00:07:52.208 "nvme_admin": false, 00:07:52.208 "nvme_io": false, 00:07:52.208 "nvme_io_md": false, 00:07:52.208 "write_zeroes": true, 00:07:52.208 "zcopy": false, 00:07:52.208 "get_zone_info": false, 00:07:52.208 "zone_management": false, 00:07:52.208 "zone_append": false, 00:07:52.208 "compare": true, 00:07:52.208 "compare_and_write": false, 00:07:52.208 "abort": true, 00:07:52.209 "seek_hole": false, 00:07:52.209 "seek_data": false, 00:07:52.209 "copy": true, 00:07:52.209 "nvme_iov_md": false 00:07:52.209 }, 00:07:52.209 "driver_specific": { 00:07:52.209 "gpt": { 00:07:52.209 "base_bdev": "Nvme1n1", 00:07:52.209 "offset_blocks": 655360, 00:07:52.209 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:52.209 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:52.209 "partition_name": "SPDK_TEST_second" 00:07:52.209 } 00:07:52.209 } 00:07:52.209 } 00:07:52.209 ]' 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 73757 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73757 ']' 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73757 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73757 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.209 killing process with pid 73757 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73757' 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73757 00:07:52.209 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73757 00:07:52.779 00:07:52.779 real 0m2.010s 00:07:52.779 user 0m2.011s 00:07:52.779 sys 0m0.506s 00:07:52.779 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.779 ************************************ 00:07:52.779 END TEST bdev_gpt_uuid 00:07:52.779 ************************************ 00:07:52.779 23:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:52.779 23:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:53.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:53.352 Waiting for block devices as requested 00:07:53.352 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.613 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:53.613 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.613 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:58.901 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:58.901 23:41:46 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:58.901 23:41:46 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:59.165 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:59.165 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:59.165 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:59.165 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:59.165 23:41:47 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:59.165 00:07:59.165 real 0m50.477s 00:07:59.165 user 1m2.892s 00:07:59.165 sys 0m8.637s 00:07:59.165 23:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.166 ************************************ 00:07:59.166 END TEST blockdev_nvme_gpt 00:07:59.166 ************************************ 00:07:59.166 23:41:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.166 23:41:47 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:59.166 23:41:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.166 23:41:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.166 23:41:47 -- common/autotest_common.sh@10 -- # set +x 00:07:59.166 ************************************ 00:07:59.166 START TEST nvme 00:07:59.166 ************************************ 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:59.166 * Looking for test storage... 00:07:59.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.166 23:41:47 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.166 23:41:47 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.166 23:41:47 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.166 23:41:47 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.166 23:41:47 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.166 23:41:47 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.166 23:41:47 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.166 23:41:47 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.166 23:41:47 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.166 23:41:47 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.166 23:41:47 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.166 23:41:47 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:59.166 23:41:47 nvme -- scripts/common.sh@345 -- # : 1 00:07:59.166 23:41:47 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.166 23:41:47 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.166 23:41:47 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:59.166 23:41:47 nvme -- scripts/common.sh@353 -- # local d=1 00:07:59.166 23:41:47 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.166 23:41:47 nvme -- scripts/common.sh@355 -- # echo 1 00:07:59.166 23:41:47 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.166 23:41:47 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:59.166 23:41:47 nvme -- scripts/common.sh@353 -- # local d=2 00:07:59.166 23:41:47 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.166 23:41:47 nvme -- scripts/common.sh@355 -- # echo 2 00:07:59.166 23:41:47 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.166 23:41:47 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.166 23:41:47 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.166 23:41:47 nvme -- scripts/common.sh@368 -- # return 0 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.166 --rc genhtml_branch_coverage=1 00:07:59.166 --rc genhtml_function_coverage=1 00:07:59.166 --rc genhtml_legend=1 00:07:59.166 --rc geninfo_all_blocks=1 00:07:59.166 --rc geninfo_unexecuted_blocks=1 00:07:59.166 00:07:59.166 ' 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.166 --rc genhtml_branch_coverage=1 00:07:59.166 --rc genhtml_function_coverage=1 00:07:59.166 --rc genhtml_legend=1 00:07:59.166 --rc geninfo_all_blocks=1 00:07:59.166 --rc geninfo_unexecuted_blocks=1 00:07:59.166 00:07:59.166 ' 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.166 --rc genhtml_branch_coverage=1 00:07:59.166 --rc genhtml_function_coverage=1 00:07:59.166 --rc genhtml_legend=1 00:07:59.166 --rc geninfo_all_blocks=1 00:07:59.166 --rc geninfo_unexecuted_blocks=1 00:07:59.166 00:07:59.166 ' 00:07:59.166 23:41:47 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.166 --rc genhtml_branch_coverage=1 00:07:59.166 --rc genhtml_function_coverage=1 00:07:59.166 --rc genhtml_legend=1 00:07:59.166 --rc geninfo_all_blocks=1 00:07:59.166 --rc geninfo_unexecuted_blocks=1 00:07:59.166 00:07:59.166 ' 00:07:59.166 23:41:47 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:59.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:00.313 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:00.313 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:00.313 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:00.313 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:00.574 23:41:48 nvme -- nvme/nvme.sh@79 -- # uname 00:08:00.574 23:41:48 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:00.574 23:41:48 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:00.574 23:41:48 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:00.574 23:41:48 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:00.574 23:41:48 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:00.574 23:41:48 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:00.574 23:41:48 nvme -- common/autotest_common.sh@1075 -- # stubpid=74389 00:08:00.574 23:41:48 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:00.574 Waiting for stub to ready for secondary processes... 00:08:00.574 23:41:48 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:00.574 23:41:48 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:00.574 23:41:48 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74389 ]] 00:08:00.574 23:41:48 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:00.574 [2024-11-26 23:41:48.519577] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:08:00.574 [2024-11-26 23:41:48.519732] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:01.516 23:41:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:01.516 23:41:49 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74389 ]] 00:08:01.516 23:41:49 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:02.088 [2024-11-26 23:41:50.061011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.088 [2024-11-26 23:41:50.084768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.088 [2024-11-26 23:41:50.085197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.088 [2024-11-26 23:41:50.085263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.088 [2024-11-26 23:41:50.099493] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:02.088 [2024-11-26 23:41:50.099544] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:02.088 [2024-11-26 23:41:50.117687] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:02.089 [2024-11-26 23:41:50.118174] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:02.089 [2024-11-26 23:41:50.120073] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:02.089 [2024-11-26 23:41:50.120484] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:02.089 [2024-11-26 23:41:50.120639] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:02.089 [2024-11-26 23:41:50.122900] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:02.089 [2024-11-26 23:41:50.123309] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:02.089 [2024-11-26 23:41:50.123422] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:02.089 [2024-11-26 23:41:50.126271] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:02.089 [2024-11-26 23:41:50.126478] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:02.089 [2024-11-26 23:41:50.126553] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:02.089 [2024-11-26 23:41:50.126635] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:02.089 [2024-11-26 23:41:50.126676] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:02.661 done. 00:08:02.661 23:41:50 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:02.661 23:41:50 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:02.661 23:41:50 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:02.661 23:41:50 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:02.661 23:41:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.661 23:41:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.661 ************************************ 00:08:02.661 START TEST nvme_reset 00:08:02.661 ************************************ 00:08:02.661 23:41:50 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:02.661 Initializing NVMe Controllers 00:08:02.661 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:02.661 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:02.661 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:02.661 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:02.661 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:02.661 ************************************ 00:08:02.661 END TEST nvme_reset 00:08:02.661 ************************************ 00:08:02.661 00:08:02.661 real 0m0.211s 00:08:02.661 user 0m0.061s 00:08:02.661 sys 0m0.102s 00:08:02.661 23:41:50 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.661 23:41:50 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:02.661 23:41:50 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:02.661 23:41:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.661 23:41:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.661 23:41:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.661 ************************************ 00:08:02.661 START TEST nvme_identify 00:08:02.661 ************************************ 00:08:02.661 23:41:50 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:02.661 23:41:50 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:02.661 23:41:50 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:02.661 23:41:50 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:02.926 23:41:50 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:02.926 23:41:50 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:02.926 23:41:50 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:02.926 23:41:50 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:02.926 23:41:50 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:02.926 23:41:50 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:02.926 23:41:50 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:02.926 23:41:50 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:02.926 23:41:50 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:02.926 [2024-11-26 23:41:51.019274] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74421 terminated unexpected 00:08:02.926 ===================================================== 00:08:02.926 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:02.926 ===================================================== 00:08:02.926 Controller Capabilities/Features 00:08:02.926 ================================ 00:08:02.926 Vendor ID: 1b36 00:08:02.926 Subsystem Vendor ID: 1af4 00:08:02.926 Serial Number: 12343 00:08:02.926 Model Number: QEMU NVMe Ctrl 00:08:02.926 Firmware Version: 8.0.0 00:08:02.926 Recommended Arb Burst: 6 00:08:02.926 IEEE OUI Identifier: 00 54 52 00:08:02.926 Multi-path I/O 00:08:02.926 May have multiple subsystem ports: No 00:08:02.926 May have multiple controllers: Yes 00:08:02.926 Associated with SR-IOV VF: No 00:08:02.926 Max Data Transfer Size: 524288 00:08:02.926 Max Number of Namespaces: 256 00:08:02.926 Max Number of I/O Queues: 64 00:08:02.926 NVMe Specification Version (VS): 1.4 00:08:02.926 NVMe Specification Version (Identify): 1.4 00:08:02.926 Maximum Queue Entries: 2048 00:08:02.926 Contiguous Queues Required: Yes 00:08:02.926 Arbitration Mechanisms Supported 00:08:02.926 Weighted Round Robin: Not Supported 00:08:02.926 Vendor Specific: Not Supported 00:08:02.926 Reset Timeout: 7500 ms 00:08:02.926 Doorbell Stride: 4 bytes 00:08:02.926 NVM Subsystem Reset: Not Supported 00:08:02.926 Command Sets Supported 00:08:02.926 NVM Command Set: Supported 00:08:02.926 Boot Partition: Not Supported 00:08:02.926 Memory Page Size Minimum: 4096 bytes 00:08:02.926 Memory Page Size Maximum: 65536 bytes 00:08:02.926 Persistent Memory Region: Not Supported 00:08:02.926 Optional Asynchronous Events Supported 00:08:02.926 Namespace Attribute Notices: Supported 00:08:02.926 Firmware Activation Notices: Not Supported 00:08:02.926 ANA Change Notices: Not Supported 00:08:02.926 PLE Aggregate Log Change Notices: Not Supported 00:08:02.926 LBA Status Info Alert Notices: Not Supported 00:08:02.926 EGE Aggregate Log Change Notices: Not Supported 00:08:02.926 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.926 Zone Descriptor Change Notices: Not Supported 00:08:02.926 Discovery Log Change Notices: Not Supported 00:08:02.926 Controller Attributes 00:08:02.926 128-bit Host Identifier: Not Supported 00:08:02.926 Non-Operational Permissive Mode: Not Supported 00:08:02.926 NVM Sets: Not Supported 00:08:02.926 Read Recovery Levels: Not Supported 00:08:02.926 Endurance Groups: Supported 00:08:02.926 Predictable Latency Mode: Not Supported 00:08:02.926 Traffic Based Keep ALive: Not Supported 00:08:02.926 Namespace Granularity: Not Supported 00:08:02.926 SQ Associations: Not Supported 00:08:02.926 UUID List: Not Supported 00:08:02.926 Multi-Domain Subsystem: Not Supported 00:08:02.926 Fixed Capacity Management: Not Supported 00:08:02.926 Variable Capacity Management: Not Supported 00:08:02.926 Delete Endurance Group: Not Supported 00:08:02.926 Delete NVM Set: Not Supported 00:08:02.927 Extended LBA Formats Supported: Supported 00:08:02.927 Flexible Data Placement Supported: Supported 00:08:02.927 00:08:02.927 Controller Memory Buffer Support 00:08:02.927 ================================ 00:08:02.927 Supported: No 00:08:02.927 
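[editor's note] The trace just above shows how the identify stage finds its targets: get_nvme_bdfs() builds the BDF list from gen_nvme.sh piped through jq, and nvme.sh@14 then runs spdk_nvme_identify -i 0 against the bound controllers. The following is a minimal standalone sketch of that pattern, not the test's own code; the $rootdir path is an assumption taken from the paths printed in this log, and it presumes setup.sh has already bound the controllers to a userspace driver.

#!/usr/bin/env bash
# Sketch of the BDF-enumeration pattern traced above (assumed checkout path, not the actual test script).
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk   # assumption: matches the paths seen in this log

# gen_nvme.sh emits a JSON config whose controller entries carry the PCI address in
# .params.traddr; jq flattens that to one BDF per line, as get_nvme_bdfs() does above.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf 'found %d controller(s): %s\n' "${#bdfs[@]}" "${bdfs[*]}"

# With the controllers bound via setup.sh, a single identify pass reports on all of them,
# which is what produces the per-controller dumps in this section of the log.
"$rootdir/build/bin/spdk_nvme_identify" -i 0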
00:08:02.927 Persistent Memory Region Support 00:08:02.927 ================================ 00:08:02.927 Supported: No 00:08:02.927 00:08:02.927 Admin Command Set Attributes 00:08:02.927 ============================ 00:08:02.927 Security Send/Receive: Not Supported 00:08:02.927 Format NVM: Supported 00:08:02.927 Firmware Activate/Download: Not Supported 00:08:02.927 Namespace Management: Supported 00:08:02.927 Device Self-Test: Not Supported 00:08:02.927 Directives: Supported 00:08:02.927 NVMe-MI: Not Supported 00:08:02.927 Virtualization Management: Not Supported 00:08:02.927 Doorbell Buffer Config: Supported 00:08:02.927 Get LBA Status Capability: Not Supported 00:08:02.927 Command & Feature Lockdown Capability: Not Supported 00:08:02.927 Abort Command Limit: 4 00:08:02.927 Async Event Request Limit: 4 00:08:02.927 Number of Firmware Slots: N/A 00:08:02.927 Firmware Slot 1 Read-Only: N/A 00:08:02.927 Firmware Activation Without Reset: N/A 00:08:02.927 Multiple Update Detection Support: N/A 00:08:02.927 Firmware Update Granularity: No Information Provided 00:08:02.927 Per-Namespace SMART Log: Yes 00:08:02.927 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.927 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:02.927 Command Effects Log Page: Supported 00:08:02.927 Get Log Page Extended Data: Supported 00:08:02.927 Telemetry Log Pages: Not Supported 00:08:02.927 Persistent Event Log Pages: Not Supported 00:08:02.927 Supported Log Pages Log Page: May Support 00:08:02.927 Commands Supported & Effects Log Page: Not Supported 00:08:02.927 Feature Identifiers & Effects Log Page:May Support 00:08:02.927 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.927 Data Area 4 for Telemetry Log: Not Supported 00:08:02.927 Error Log Page Entries Supported: 1 00:08:02.927 Keep Alive: Not Supported 00:08:02.927 00:08:02.927 NVM Command Set Attributes 00:08:02.927 ========================== 00:08:02.927 Submission Queue Entry Size 00:08:02.927 Max: 64 00:08:02.927 Min: 64 00:08:02.927 Completion Queue Entry Size 00:08:02.927 Max: 16 00:08:02.927 Min: 16 00:08:02.927 Number of Namespaces: 256 00:08:02.927 Compare Command: Supported 00:08:02.927 Write Uncorrectable Command: Not Supported 00:08:02.927 Dataset Management Command: Supported 00:08:02.927 Write Zeroes Command: Supported 00:08:02.927 Set Features Save Field: Supported 00:08:02.927 Reservations: Not Supported 00:08:02.927 Timestamp: Supported 00:08:02.927 Copy: Supported 00:08:02.927 Volatile Write Cache: Present 00:08:02.927 Atomic Write Unit (Normal): 1 00:08:02.927 Atomic Write Unit (PFail): 1 00:08:02.927 Atomic Compare & Write Unit: 1 00:08:02.927 Fused Compare & Write: Not Supported 00:08:02.927 Scatter-Gather List 00:08:02.927 SGL Command Set: Supported 00:08:02.927 SGL Keyed: Not Supported 00:08:02.927 SGL Bit Bucket Descriptor: Not Supported 00:08:02.927 SGL Metadata Pointer: Not Supported 00:08:02.927 Oversized SGL: Not Supported 00:08:02.927 SGL Metadata Address: Not Supported 00:08:02.927 SGL Offset: Not Supported 00:08:02.927 Transport SGL Data Block: Not Supported 00:08:02.927 Replay Protected Memory Block: Not Supported 00:08:02.927 00:08:02.927 Firmware Slot Information 00:08:02.927 ========================= 00:08:02.927 Active slot: 1 00:08:02.927 Slot 1 Firmware Revision: 1.0 00:08:02.927 00:08:02.927 00:08:02.927 Commands Supported and Effects 00:08:02.927 ============================== 00:08:02.927 Admin Commands 00:08:02.927 -------------- 00:08:02.927 Delete I/O Submission Queue (00h): Supported 
00:08:02.927 Create I/O Submission Queue (01h): Supported 00:08:02.927 Get Log Page (02h): Supported 00:08:02.927 Delete I/O Completion Queue (04h): Supported 00:08:02.927 Create I/O Completion Queue (05h): Supported 00:08:02.927 Identify (06h): Supported 00:08:02.927 Abort (08h): Supported 00:08:02.927 Set Features (09h): Supported 00:08:02.927 Get Features (0Ah): Supported 00:08:02.927 Asynchronous Event Request (0Ch): Supported 00:08:02.927 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.927 Directive Send (19h): Supported 00:08:02.927 Directive Receive (1Ah): Supported 00:08:02.927 Virtualization Management (1Ch): Supported 00:08:02.927 Doorbell Buffer Config (7Ch): Supported 00:08:02.927 Format NVM (80h): Supported LBA-Change 00:08:02.927 I/O Commands 00:08:02.927 ------------ 00:08:02.927 Flush (00h): Supported LBA-Change 00:08:02.927 Write (01h): Supported LBA-Change 00:08:02.927 Read (02h): Supported 00:08:02.927 Compare (05h): Supported 00:08:02.927 Write Zeroes (08h): Supported LBA-Change 00:08:02.927 Dataset Management (09h): Supported LBA-Change 00:08:02.927 Unknown (0Ch): Supported 00:08:02.927 Unknown (12h): Supported 00:08:02.927 Copy (19h): Supported LBA-Change 00:08:02.927 Unknown (1Dh): Supported LBA-Change 00:08:02.927 00:08:02.927 Error Log 00:08:02.927 ========= 00:08:02.927 00:08:02.927 Arbitration 00:08:02.927 =========== 00:08:02.927 Arbitration Burst: no limit 00:08:02.927 00:08:02.927 Power Management 00:08:02.927 ================ 00:08:02.927 Number of Power States: 1 00:08:02.927 Current Power State: Power State #0 00:08:02.927 Power State #0: 00:08:02.927 Max Power: 25.00 W 00:08:02.927 Non-Operational State: Operational 00:08:02.927 Entry Latency: 16 microseconds 00:08:02.927 Exit Latency: 4 microseconds 00:08:02.927 Relative Read Throughput: 0 00:08:02.927 Relative Read Latency: 0 00:08:02.927 Relative Write Throughput: 0 00:08:02.927 Relative Write Latency: 0 00:08:02.927 Idle Power: Not Reported 00:08:02.927 Active Power: Not Reported 00:08:02.927 Non-Operational Permissive Mode: Not Supported 00:08:02.927 00:08:02.927 Health Information 00:08:02.927 ================== 00:08:02.927 Critical Warnings: 00:08:02.927 Available Spare Space: OK 00:08:02.927 Temperature: OK 00:08:02.927 Device Reliability: OK 00:08:02.927 Read Only: No 00:08:02.927 Volatile Memory Backup: OK 00:08:02.927 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.927 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.927 Available Spare: 0% 00:08:02.927 Available Spare Threshold: 0% 00:08:02.927 Life Percentage Used: 0% 00:08:02.927 Data Units Read: 963 00:08:02.927 Data Units Written: 892 00:08:02.927 Host Read Commands: 40598 00:08:02.927 Host Write Commands: 40022 00:08:02.927 Controller Busy Time: 0 minutes 00:08:02.927 Power Cycles: 0 00:08:02.927 Power On Hours: 0 hours 00:08:02.927 Unsafe Shutdowns: 0 00:08:02.927 Unrecoverable Media Errors: 0 00:08:02.927 Lifetime Error Log Entries: 0 00:08:02.927 Warning Temperature Time: 0 minutes 00:08:02.927 Critical Temperature Time: 0 minutes 00:08:02.927 00:08:02.927 Number of Queues 00:08:02.927 ================ 00:08:02.927 Number of I/O Submission Queues: 64 00:08:02.927 Number of I/O Completion Queues: 64 00:08:02.927 00:08:02.927 ZNS Specific Controller Data 00:08:02.927 ============================ 00:08:02.927 Zone Append Size Limit: 0 00:08:02.927 00:08:02.927 00:08:02.927 Active Namespaces 00:08:02.927 ================= 00:08:02.927 Namespace ID:1 00:08:02.927 Error Recovery Timeout: Unlimited 00:08:02.927 
Command Set Identifier: NVM (00h) 00:08:02.927 Deallocate: Supported 00:08:02.927 Deallocated/Unwritten Error: Supported 00:08:02.927 Deallocated Read Value: All 0x00 00:08:02.927 Deallocate in Write Zeroes: Not Supported 00:08:02.927 Deallocated Guard Field: 0xFFFF 00:08:02.927 Flush: Supported 00:08:02.927 Reservation: Not Supported 00:08:02.927 Namespace Sharing Capabilities: Multiple Controllers 00:08:02.927 Size (in LBAs): 262144 (1GiB) 00:08:02.927 Capacity (in LBAs): 262144 (1GiB) 00:08:02.927 Utilization (in LBAs): 262144 (1GiB) 00:08:02.927 Thin Provisioning: Not Supported 00:08:02.927 Per-NS Atomic Units: No 00:08:02.927 Maximum Single Source Range Length: 128 00:08:02.927 Maximum Copy Length: 128 00:08:02.927 Maximum Source Range Count: 128 00:08:02.927 NGUID/EUI64 Never Reused: No 00:08:02.927 Namespace Write Protected: No 00:08:02.927 Endurance group ID: 1 00:08:02.927 Number of LBA Formats: 8 00:08:02.927 Current LBA Format: LBA Format #04 00:08:02.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.927 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.927 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.927 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.927 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.927 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.927 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.928 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.928 00:08:02.928 Get Feature FDP: 00:08:02.928 ================ 00:08:02.928 Enabled: Yes 00:08:02.928 FDP configuration index: 0 00:08:02.928 00:08:02.928 FDP configurations log page 00:08:02.928 =========================== 00:08:02.928 Number of FDP configurations: 1 00:08:02.928 Version: 0 00:08:02.928 Size: 112 00:08:02.928 FDP Configuration Descriptor: 0 00:08:02.928 Descriptor Size: 96 00:08:02.928 Reclaim Group Identifier format: 2 00:08:02.928 FDP Volatile Write Cache: Not Present 00:08:02.928 FDP Configuration: Valid 00:08:02.928 Vendor Specific Size: 0 00:08:02.928 Number of Reclaim Groups: 2 00:08:02.928 Number of Recalim Unit Handles: 8 00:08:02.928 Max Placement Identifiers: 128 00:08:02.928 Number of Namespaces Suppprted: 256 00:08:02.928 Reclaim unit Nominal Size: 6000000 bytes 00:08:02.928 Estimated Reclaim Unit Time Limit: Not Reported 00:08:02.928 RUH Desc #000: RUH Type: Initially Isolated 00:08:02.928 RUH Desc #001: RUH Type: Initially Isolated 00:08:02.928 RUH Desc #002: RUH Type: Initially Isolated 00:08:02.928 RUH Desc #003: RUH Type: Initially Isolated 00:08:02.928 RUH Desc #004: RUH Type: Initially Isolated 00:08:02.928 RUH Desc #005: RUH Type: Initially Isolated 00:08:02.928 RUH Desc #006: RUH Type: Initially Isolated 00:08:02.928 RUH Desc #007: RUH Type: Initially Isolated 00:08:02.928 00:08:02.928 FDP reclaim unit handle usage log page 00:08:02.928 ==================================[2024-11-26 23:41:51.023300] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74421 terminated unexpected 00:08:02.928 ==== 00:08:02.928 Number of Reclaim Unit Handles: 8 00:08:02.928 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:02.928 RUH Usage Desc #001: RUH Attributes: Unused 00:08:02.928 RUH Usage Desc #002: RUH Attributes: Unused 00:08:02.928 RUH Usage Desc #003: RUH Attributes: Unused 00:08:02.928 RUH Usage Desc #004: RUH Attributes: Unused 00:08:02.928 RUH Usage Desc #005: RUH Attributes: Unused 00:08:02.928 RUH Usage Desc #006: RUH Attributes: Unused 00:08:02.928 RUH Usage Desc 
#007: RUH Attributes: Unused 00:08:02.928 00:08:02.928 FDP statistics log page 00:08:02.928 ======================= 00:08:02.928 Host bytes with metadata written: 560111616 00:08:02.928 Media bytes with metadata written: 560189440 00:08:02.928 Media bytes erased: 0 00:08:02.928 00:08:02.928 FDP events log page 00:08:02.928 =================== 00:08:02.928 Number of FDP events: 0 00:08:02.928 00:08:02.928 NVM Specific Namespace Data 00:08:02.928 =========================== 00:08:02.928 Logical Block Storage Tag Mask: 0 00:08:02.928 Protection Information Capabilities: 00:08:02.928 16b Guard Protection Information Storage Tag Support: No 00:08:02.928 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.928 Storage Tag Check Read Support: No 00:08:02.928 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.928 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.928 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.928 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.928 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.928 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.928 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.928 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.928 ===================================================== 00:08:02.928 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:02.928 ===================================================== 00:08:02.928 Controller Capabilities/Features 00:08:02.928 ================================ 00:08:02.928 Vendor ID: 1b36 00:08:02.928 Subsystem Vendor ID: 1af4 00:08:02.928 Serial Number: 12341 00:08:02.928 Model Number: QEMU NVMe Ctrl 00:08:02.928 Firmware Version: 8.0.0 00:08:02.928 Recommended Arb Burst: 6 00:08:02.928 IEEE OUI Identifier: 00 54 52 00:08:02.928 Multi-path I/O 00:08:02.928 May have multiple subsystem ports: No 00:08:02.928 May have multiple controllers: No 00:08:02.928 Associated with SR-IOV VF: No 00:08:02.928 Max Data Transfer Size: 524288 00:08:02.928 Max Number of Namespaces: 256 00:08:02.928 Max Number of I/O Queues: 64 00:08:02.928 NVMe Specification Version (VS): 1.4 00:08:02.928 NVMe Specification Version (Identify): 1.4 00:08:02.928 Maximum Queue Entries: 2048 00:08:02.928 Contiguous Queues Required: Yes 00:08:02.928 Arbitration Mechanisms Supported 00:08:02.928 Weighted Round Robin: Not Supported 00:08:02.928 Vendor Specific: Not Supported 00:08:02.928 Reset Timeout: 7500 ms 00:08:02.928 Doorbell Stride: 4 bytes 00:08:02.928 NVM Subsystem Reset: Not Supported 00:08:02.928 Command Sets Supported 00:08:02.928 NVM Command Set: Supported 00:08:02.928 Boot Partition: Not Supported 00:08:02.928 Memory Page Size Minimum: 4096 bytes 00:08:02.928 Memory Page Size Maximum: 65536 bytes 00:08:02.928 Persistent Memory Region: Not Supported 00:08:02.928 Optional Asynchronous Events Supported 00:08:02.928 Namespace Attribute Notices: Supported 00:08:02.928 Firmware Activation Notices: Not Supported 00:08:02.928 ANA Change Notices: Not Supported 00:08:02.928 PLE Aggregate Log Change Notices: Not Supported 00:08:02.928 LBA Status Info Alert Notices: Not Supported 00:08:02.928 EGE Aggregate Log Change 
Notices: Not Supported 00:08:02.928 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.928 Zone Descriptor Change Notices: Not Supported 00:08:02.928 Discovery Log Change Notices: Not Supported 00:08:02.928 Controller Attributes 00:08:02.928 128-bit Host Identifier: Not Supported 00:08:02.928 Non-Operational Permissive Mode: Not Supported 00:08:02.928 NVM Sets: Not Supported 00:08:02.928 Read Recovery Levels: Not Supported 00:08:02.928 Endurance Groups: Not Supported 00:08:02.928 Predictable Latency Mode: Not Supported 00:08:02.928 Traffic Based Keep ALive: Not Supported 00:08:02.928 Namespace Granularity: Not Supported 00:08:02.928 SQ Associations: Not Supported 00:08:02.928 UUID List: Not Supported 00:08:02.928 Multi-Domain Subsystem: Not Supported 00:08:02.928 Fixed Capacity Management: Not Supported 00:08:02.928 Variable Capacity Management: Not Supported 00:08:02.928 Delete Endurance Group: Not Supported 00:08:02.928 Delete NVM Set: Not Supported 00:08:02.928 Extended LBA Formats Supported: Supported 00:08:02.928 Flexible Data Placement Supported: Not Supported 00:08:02.928 00:08:02.928 Controller Memory Buffer Support 00:08:02.928 ================================ 00:08:02.928 Supported: No 00:08:02.928 00:08:02.928 Persistent Memory Region Support 00:08:02.928 ================================ 00:08:02.928 Supported: No 00:08:02.928 00:08:02.928 Admin Command Set Attributes 00:08:02.928 ============================ 00:08:02.928 Security Send/Receive: Not Supported 00:08:02.928 Format NVM: Supported 00:08:02.928 Firmware Activate/Download: Not Supported 00:08:02.928 Namespace Management: Supported 00:08:02.928 Device Self-Test: Not Supported 00:08:02.928 Directives: Supported 00:08:02.928 NVMe-MI: Not Supported 00:08:02.928 Virtualization Management: Not Supported 00:08:02.928 Doorbell Buffer Config: Supported 00:08:02.928 Get LBA Status Capability: Not Supported 00:08:02.928 Command & Feature Lockdown Capability: Not Supported 00:08:02.928 Abort Command Limit: 4 00:08:02.928 Async Event Request Limit: 4 00:08:02.928 Number of Firmware Slots: N/A 00:08:02.928 Firmware Slot 1 Read-Only: N/A 00:08:02.928 Firmware Activation Without Reset: N/A 00:08:02.928 Multiple Update Detection Support: N/A 00:08:02.928 Firmware Update Granularity: No Information Provided 00:08:02.928 Per-Namespace SMART Log: Yes 00:08:02.928 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.928 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:02.928 Command Effects Log Page: Supported 00:08:02.928 Get Log Page Extended Data: Supported 00:08:02.928 Telemetry Log Pages: Not Supported 00:08:02.928 Persistent Event Log Pages: Not Supported 00:08:02.928 Supported Log Pages Log Page: May Support 00:08:02.928 Commands Supported & Effects Log Page: Not Supported 00:08:02.928 Feature Identifiers & Effects Log Page:May Support 00:08:02.928 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.928 Data Area 4 for Telemetry Log: Not Supported 00:08:02.928 Error Log Page Entries Supported: 1 00:08:02.928 Keep Alive: Not Supported 00:08:02.928 00:08:02.928 NVM Command Set Attributes 00:08:02.928 ========================== 00:08:02.928 Submission Queue Entry Size 00:08:02.928 Max: 64 00:08:02.928 Min: 64 00:08:02.928 Completion Queue Entry Size 00:08:02.928 Max: 16 00:08:02.928 Min: 16 00:08:02.928 Number of Namespaces: 256 00:08:02.928 Compare Command: Supported 00:08:02.928 Write Uncorrectable Command: Not Supported 00:08:02.928 Dataset Management Command: Supported 00:08:02.928 Write Zeroes Command: 
Supported 00:08:02.928 Set Features Save Field: Supported 00:08:02.929 Reservations: Not Supported 00:08:02.929 Timestamp: Supported 00:08:02.929 Copy: Supported 00:08:02.929 Volatile Write Cache: Present 00:08:02.929 Atomic Write Unit (Normal): 1 00:08:02.929 Atomic Write Unit (PFail): 1 00:08:02.929 Atomic Compare & Write Unit: 1 00:08:02.929 Fused Compare & Write: Not Supported 00:08:02.929 Scatter-Gather List 00:08:02.929 SGL Command Set: Supported 00:08:02.929 SGL Keyed: Not Supported 00:08:02.929 SGL Bit Bucket Descriptor: Not Supported 00:08:02.929 SGL Metadata Pointer: Not Supported 00:08:02.929 Oversized SGL: Not Supported 00:08:02.929 SGL Metadata Address: Not Supported 00:08:02.929 SGL Offset: Not Supported 00:08:02.929 Transport SGL Data Block: Not Supported 00:08:02.929 Replay Protected Memory Block: Not Supported 00:08:02.929 00:08:02.929 Firmware Slot Information 00:08:02.929 ========================= 00:08:02.929 Active slot: 1 00:08:02.929 Slot 1 Firmware Revision: 1.0 00:08:02.929 00:08:02.929 00:08:02.929 Commands Supported and Effects 00:08:02.929 ============================== 00:08:02.929 Admin Commands 00:08:02.929 -------------- 00:08:02.929 Delete I/O Submission Queue (00h): Supported 00:08:02.929 Create I/O Submission Queue (01h): Supported 00:08:02.929 Get Log Page (02h): Supported 00:08:02.929 Delete I/O Completion Queue (04h): Supported 00:08:02.929 Create I/O Completion Queue (05h): Supported 00:08:02.929 Identify (06h): Supported 00:08:02.929 Abort (08h): Supported 00:08:02.929 Set Features (09h): Supported 00:08:02.929 Get Features (0Ah): Supported 00:08:02.929 Asynchronous Event Request (0Ch): Supported 00:08:02.929 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.929 Directive Send (19h): Supported 00:08:02.929 Directive Receive (1Ah): Supported 00:08:02.929 Virtualization Management (1Ch): Supported 00:08:02.929 Doorbell Buffer Config (7Ch): Supported 00:08:02.929 Format NVM (80h): Supported LBA-Change 00:08:02.929 I/O Commands 00:08:02.929 ------------ 00:08:02.929 Flush (00h): Supported LBA-Change 00:08:02.929 Write (01h): Supported LBA-Change 00:08:02.929 Read (02h): Supported 00:08:02.929 Compare (05h): Supported 00:08:02.929 Write Zeroes (08h): Supported LBA-Change 00:08:02.929 Dataset Management (09h): Supported LBA-Change 00:08:02.929 Unknown (0Ch): Supported 00:08:02.929 Unknown (12h): Supported 00:08:02.929 Copy (19h): Supported LBA-Change 00:08:02.929 Unknown (1Dh): Supported LBA-Change 00:08:02.929 00:08:02.929 Error Log 00:08:02.929 ========= 00:08:02.929 00:08:02.929 Arbitration 00:08:02.929 =========== 00:08:02.929 Arbitration Burst: no limit 00:08:02.929 00:08:02.929 Power Management 00:08:02.929 ================ 00:08:02.929 Number of Power States: 1 00:08:02.929 Current Power State: Power State #0 00:08:02.929 Power State #0: 00:08:02.929 Max Power: 25.00 W 00:08:02.929 Non-Operational State: Operational 00:08:02.929 Entry Latency: 16 microseconds 00:08:02.929 Exit Latency: 4 microseconds 00:08:02.929 Relative Read Throughput: 0 00:08:02.929 Relative Read Latency: 0 00:08:02.929 Relative Write Throughput: 0 00:08:02.929 Relative Write Latency: 0 00:08:02.929 Idle Power: Not Reported 00:08:02.929 Active Power: Not Reported 00:08:02.929 Non-Operational Permissive Mode: Not Supported 00:08:02.929 00:08:02.929 Health Information 00:08:02.929 ================== 00:08:02.929 Critical Warnings: 00:08:02.929 Available Spare Space: OK 00:08:02.929 Temperature: OK 00:08:02.929 Device Reliability: OK 00:08:02.929 Read Only: No 
00:08:02.929 Volatile Memory Backup: OK 00:08:02.929 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.929 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.929 Available Spare: 0% 00:08:02.929 Available Spare Threshold: 0% 00:08:02.929 Life Percentage Used: 0% 00:08:02.929 Data Units Read: 1078 00:08:02.929 Data Units Written: 946 00:08:02.929 Host Read Commands: 53971 00:08:02.929 Host Write Commands: 52760 00:08:02.929 Controller Busy Time: 0 minutes 00:08:02.929 Power Cycles: 0 00:08:02.929 Power On Hours: 0 hours 00:08:02.929 Unsafe Shutdowns: 0 00:08:02.929 Unrecoverable Media Errors: 0 00:08:02.929 Lifetime Error Log Entries: 0 00:08:02.929 Warning Temperature Time: 0 minutes 00:08:02.929 Critical Temperature Time: 0 minutes 00:08:02.929 00:08:02.929 Number of Queues 00:08:02.929 ================ 00:08:02.929 Number of I/O Submission Queues: 64 00:08:02.929 Number of I/O Completion Queues: 64 00:08:02.929 00:08:02.929 ZNS Specific Controller Data 00:08:02.929 ============================ 00:08:02.929 Zone Append Size Limit: 0 00:08:02.929 00:08:02.929 00:08:02.929 Active Namespaces 00:08:02.929 ================= 00:08:02.929 Namespace ID:1 00:08:02.929 Error Recovery Timeout: Unlimited 00:08:02.929 Command Set Identifier: NVM (00h) 00:08:02.929 Deallocate: Supported 00:08:02.929 Deallocated/Unwritten Error: Supported 00:08:02.929 Deallocated Read Value: All 0x00 00:08:02.929 Deallocate in Write Zeroes: Not Supported 00:08:02.929 Deallocated Guard Field: 0xFFFF 00:08:02.929 Flush: Supported 00:08:02.929 Reservation: Not Supported 00:08:02.929 Namespace Sharing Capabilities: Private 00:08:02.929 Size (in LBAs): 1310720 (5GiB) 00:08:02.929 Capacity (in LBAs): 1310720 (5GiB) 00:08:02.929 Utilization (in LBAs): 1310720 (5GiB) 00:08:02.929 Thin Provisioning: Not Supported 00:08:02.929 Per-NS Atomic Units: No 00:08:02.929 Maximum Single Source Range Length: 128 00:08:02.929 Maximum Copy Length: 128 00:08:02.929 Maximum Source Range Count: 128 00:08:02.929 NGUID/EUI64 Never Reused: No 00:08:02.929 Namespace Write Protected: No 00:08:02.929 Number of LBA Formats: 8 00:08:02.929 Current LBA Format: LBA Format #04 00:08:02.929 LBA Format #00: Data Size: 512 Metadata Si[2024-11-26 23:41:51.024753] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74421 terminated unexpected 00:08:02.929 ze: 0 00:08:02.929 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.929 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.929 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.929 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.929 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.929 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.929 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.929 00:08:02.929 NVM Specific Namespace Data 00:08:02.929 =========================== 00:08:02.929 Logical Block Storage Tag Mask: 0 00:08:02.929 Protection Information Capabilities: 00:08:02.929 16b Guard Protection Information Storage Tag Support: No 00:08:02.929 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.929 Storage Tag Check Read Support: No 00:08:02.929 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.929 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.929 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.929 Extended LBA Format 
#03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.929 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.929 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.929 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.929 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.929 ===================================================== 00:08:02.929 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:02.929 ===================================================== 00:08:02.929 Controller Capabilities/Features 00:08:02.929 ================================ 00:08:02.929 Vendor ID: 1b36 00:08:02.929 Subsystem Vendor ID: 1af4 00:08:02.929 Serial Number: 12340 00:08:02.929 Model Number: QEMU NVMe Ctrl 00:08:02.929 Firmware Version: 8.0.0 00:08:02.929 Recommended Arb Burst: 6 00:08:02.929 IEEE OUI Identifier: 00 54 52 00:08:02.929 Multi-path I/O 00:08:02.929 May have multiple subsystem ports: No 00:08:02.929 May have multiple controllers: No 00:08:02.929 Associated with SR-IOV VF: No 00:08:02.929 Max Data Transfer Size: 524288 00:08:02.929 Max Number of Namespaces: 256 00:08:02.929 Max Number of I/O Queues: 64 00:08:02.929 NVMe Specification Version (VS): 1.4 00:08:02.929 NVMe Specification Version (Identify): 1.4 00:08:02.929 Maximum Queue Entries: 2048 00:08:02.929 Contiguous Queues Required: Yes 00:08:02.929 Arbitration Mechanisms Supported 00:08:02.929 Weighted Round Robin: Not Supported 00:08:02.929 Vendor Specific: Not Supported 00:08:02.929 Reset Timeout: 7500 ms 00:08:02.929 Doorbell Stride: 4 bytes 00:08:02.929 NVM Subsystem Reset: Not Supported 00:08:02.929 Command Sets Supported 00:08:02.929 NVM Command Set: Supported 00:08:02.929 Boot Partition: Not Supported 00:08:02.929 Memory Page Size Minimum: 4096 bytes 00:08:02.929 Memory Page Size Maximum: 65536 bytes 00:08:02.929 Persistent Memory Region: Not Supported 00:08:02.930 Optional Asynchronous Events Supported 00:08:02.930 Namespace Attribute Notices: Supported 00:08:02.930 Firmware Activation Notices: Not Supported 00:08:02.930 ANA Change Notices: Not Supported 00:08:02.930 PLE Aggregate Log Change Notices: Not Supported 00:08:02.930 LBA Status Info Alert Notices: Not Supported 00:08:02.930 EGE Aggregate Log Change Notices: Not Supported 00:08:02.930 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.930 Zone Descriptor Change Notices: Not Supported 00:08:02.930 Discovery Log Change Notices: Not Supported 00:08:02.930 Controller Attributes 00:08:02.930 128-bit Host Identifier: Not Supported 00:08:02.930 Non-Operational Permissive Mode: Not Supported 00:08:02.930 NVM Sets: Not Supported 00:08:02.930 Read Recovery Levels: Not Supported 00:08:02.930 Endurance Groups: Not Supported 00:08:02.930 Predictable Latency Mode: Not Supported 00:08:02.930 Traffic Based Keep ALive: Not Supported 00:08:02.930 Namespace Granularity: Not Supported 00:08:02.930 SQ Associations: Not Supported 00:08:02.930 UUID List: Not Supported 00:08:02.930 Multi-Domain Subsystem: Not Supported 00:08:02.930 Fixed Capacity Management: Not Supported 00:08:02.930 Variable Capacity Management: Not Supported 00:08:02.930 Delete Endurance Group: Not Supported 00:08:02.930 Delete NVM Set: Not Supported 00:08:02.930 Extended LBA Formats Supported: Supported 00:08:02.930 Flexible Data Placement Supported: Not Supported 00:08:02.930 00:08:02.930 Controller 
Memory Buffer Support 00:08:02.930 ================================ 00:08:02.930 Supported: No 00:08:02.930 00:08:02.930 Persistent Memory Region Support 00:08:02.930 ================================ 00:08:02.930 Supported: No 00:08:02.930 00:08:02.930 Admin Command Set Attributes 00:08:02.930 ============================ 00:08:02.930 Security Send/Receive: Not Supported 00:08:02.930 Format NVM: Supported 00:08:02.930 Firmware Activate/Download: Not Supported 00:08:02.930 Namespace Management: Supported 00:08:02.930 Device Self-Test: Not Supported 00:08:02.930 Directives: Supported 00:08:02.930 NVMe-MI: Not Supported 00:08:02.930 Virtualization Management: Not Supported 00:08:02.930 Doorbell Buffer Config: Supported 00:08:02.930 Get LBA Status Capability: Not Supported 00:08:02.930 Command & Feature Lockdown Capability: Not Supported 00:08:02.930 Abort Command Limit: 4 00:08:02.930 Async Event Request Limit: 4 00:08:02.930 Number of Firmware Slots: N/A 00:08:02.930 Firmware Slot 1 Read-Only: N/A 00:08:02.930 Firmware Activation Without Reset: N/A 00:08:02.930 Multiple Update Detection Support: N/A 00:08:02.930 Firmware Update Granularity: No Information Provided 00:08:02.930 Per-Namespace SMART Log: Yes 00:08:02.930 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.930 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:02.930 Command Effects Log Page: Supported 00:08:02.930 Get Log Page Extended Data: Supported 00:08:02.930 Telemetry Log Pages: Not Supported 00:08:02.930 Persistent Event Log Pages: Not Supported 00:08:02.930 Supported Log Pages Log Page: May Support 00:08:02.930 Commands Supported & Effects Log Page: Not Supported 00:08:02.930 Feature Identifiers & Effects Log Page:May Support 00:08:02.930 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.930 Data Area 4 for Telemetry Log: Not Supported 00:08:02.930 Error Log Page Entries Supported: 1 00:08:02.930 Keep Alive: Not Supported 00:08:02.930 00:08:02.930 NVM Command Set Attributes 00:08:02.930 ========================== 00:08:02.930 Submission Queue Entry Size 00:08:02.930 Max: 64 00:08:02.930 Min: 64 00:08:02.930 Completion Queue Entry Size 00:08:02.930 Max: 16 00:08:02.930 Min: 16 00:08:02.930 Number of Namespaces: 256 00:08:02.930 Compare Command: Supported 00:08:02.930 Write Uncorrectable Command: Not Supported 00:08:02.930 Dataset Management Command: Supported 00:08:02.930 Write Zeroes Command: Supported 00:08:02.930 Set Features Save Field: Supported 00:08:02.930 Reservations: Not Supported 00:08:02.930 Timestamp: Supported 00:08:02.930 Copy: Supported 00:08:02.930 Volatile Write Cache: Present 00:08:02.930 Atomic Write Unit (Normal): 1 00:08:02.930 Atomic Write Unit (PFail): 1 00:08:02.930 Atomic Compare & Write Unit: 1 00:08:02.930 Fused Compare & Write: Not Supported 00:08:02.930 Scatter-Gather List 00:08:02.930 SGL Command Set: Supported 00:08:02.930 SGL Keyed: Not Supported 00:08:02.930 SGL Bit Bucket Descriptor: Not Supported 00:08:02.930 SGL Metadata Pointer: Not Supported 00:08:02.930 Oversized SGL: Not Supported 00:08:02.930 SGL Metadata Address: Not Supported 00:08:02.930 SGL Offset: Not Supported 00:08:02.930 Transport SGL Data Block: Not Supported 00:08:02.930 Replay Protected Memory Block: Not Supported 00:08:02.930 00:08:02.930 Firmware Slot Information 00:08:02.930 ========================= 00:08:02.930 Active slot: 1 00:08:02.930 Slot 1 Firmware Revision: 1.0 00:08:02.930 00:08:02.930 00:08:02.930 Commands Supported and Effects 00:08:02.930 ============================== 00:08:02.930 Admin 
Commands 00:08:02.930 -------------- 00:08:02.930 Delete I/O Submission Queue (00h): Supported 00:08:02.930 Create I/O Submission Queue (01h): Supported 00:08:02.930 Get Log Page (02h): Supported 00:08:02.930 Delete I/O Completion Queue (04h): Supported 00:08:02.930 Create I/O Completion Queue (05h): Supported 00:08:02.930 Identify (06h): Supported 00:08:02.930 Abort (08h): Supported 00:08:02.930 Set Features (09h): Supported 00:08:02.930 Get Features (0Ah): Supported 00:08:02.930 Asynchronous Event Request (0Ch): Supported 00:08:02.930 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.930 Directive Send (19h): Supported 00:08:02.930 Directive Receive (1Ah): Supported 00:08:02.930 Virtualization Management (1Ch): Supported 00:08:02.930 Doorbell Buffer Config (7Ch): Supported 00:08:02.930 Format NVM (80h): Supported LBA-Change 00:08:02.930 I/O Commands 00:08:02.930 ------------ 00:08:02.930 Flush (00h): Supported LBA-Change 00:08:02.930 Write (01h): Supported LBA-Change 00:08:02.930 Read (02h): Supported 00:08:02.930 Compare (05h): Supported 00:08:02.930 Write Zeroes (08h): Supported LBA-Change 00:08:02.930 Dataset Management (09h): Supported LBA-Change 00:08:02.930 Unknown (0Ch): Supported 00:08:02.930 Unknown (12h): Supported 00:08:02.930 Copy (19h): Supported LBA-Change 00:08:02.930 Unknown (1Dh): Supported LBA-Change 00:08:02.930 00:08:02.930 Error Log 00:08:02.930 ========= 00:08:02.930 00:08:02.930 Arbitration 00:08:02.930 =========== 00:08:02.930 Arbitration Burst: no limit 00:08:02.930 00:08:02.930 Power Management 00:08:02.930 ================ 00:08:02.930 Number of Power States: 1 00:08:02.930 Current Power State: Power State #0 00:08:02.930 Power State #0: 00:08:02.930 Max Power: 25.00 W 00:08:02.930 Non-Operational State: Operational 00:08:02.930 Entry Latency: 16 microseconds 00:08:02.930 Exit Latency: 4 microseconds 00:08:02.930 Relative Read Throughput: 0 00:08:02.930 Relative Read Latency: 0 00:08:02.930 Relative Write Throughput: 0 00:08:02.930 Relative Write Latency: 0 00:08:02.930 Idle Power: Not Reported 00:08:02.930 Active Power: Not Reported 00:08:02.930 Non-Operational Permissive Mode: Not Supported 00:08:02.930 00:08:02.930 Health Information 00:08:02.930 ================== 00:08:02.930 Critical Warnings: 00:08:02.930 Available Spare Space: OK 00:08:02.930 Temperature: OK 00:08:02.930 Device Reliability: OK 00:08:02.930 Read Only: No 00:08:02.930 Volatile Memory Backup: OK 00:08:02.930 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.930 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.930 Available Spare: 0% 00:08:02.930 Available Spare Threshold: 0% 00:08:02.930 Life Percentage Used: 0% 00:08:02.930 Data Units Read: 736 00:08:02.930 Data Units Written: 664 00:08:02.930 Host Read Commands: 38528 00:08:02.930 Host Write Commands: 38314 00:08:02.930 Controller Busy Time: 0 minutes 00:08:02.930 Power Cycles: 0 00:08:02.930 Power On Hours: 0 hours 00:08:02.930 Unsafe Shutdowns: 0 00:08:02.930 Unrecoverable Media Errors: 0 00:08:02.930 Lifetime Error Log Entries: 0 00:08:02.930 Warning Temperature Time: 0 minutes 00:08:02.930 Critical Temperature Time: 0 minutes 00:08:02.930 00:08:02.930 Number of Queues 00:08:02.930 ================ 00:08:02.930 Number of I/O Submission Queues: 64 00:08:02.930 Number of I/O Completion Queues: 64 00:08:02.930 00:08:02.930 ZNS Specific Controller Data 00:08:02.930 ============================ 00:08:02.930 Zone Append Size Limit: 0 00:08:02.930 00:08:02.930 00:08:02.930 Active Namespaces 00:08:02.930 
================= 00:08:02.930 Namespace ID:1 00:08:02.930 Error Recovery Timeout: Unlimited 00:08:02.930 Command Set Identifier: NVM (00h) 00:08:02.930 Deallocate: Supported 00:08:02.930 Deallocated/Unwritten Error: Supported 00:08:02.931 Deallocated Read Value: All 0x00 00:08:02.931 Deallocate in Write Zeroes: Not Supported 00:08:02.931 Deallocated Guard Field: 0xFFFF 00:08:02.931 Flush: Supported 00:08:02.931 Reservation: Not Supported 00:08:02.931 Metadata Transferred as: Separate Metadata Buffer 00:08:02.931 Namespace Sharing Capabilities: Private 00:08:02.931 Size (in LBAs): 1548666 (5GiB) 00:08:02.931 Capacity (in LBAs): 1548666 (5GiB) 00:08:02.931 Utilization (in LBAs): 1548666 (5GiB) 00:08:02.931 Thin Provisioning: Not Supported 00:08:02.931 Per-NS Atomic Units: No 00:08:02.931 Maximum Single Source Range Length: 128 00:08:02.931 Maximum Copy Length: 128 00:08:02.931 Maximum Source Range Count: 128 00:08:02.931 NGUID/EUI64 Never Reused: No 00:08:02.931 Namespace Write Protected: No 00:08:02.931 Number of LBA Formats: 8 00:08:02.931 Current LBA Format: LBA Format #07 00:08:02.931 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.931 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.931 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.931 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.931 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.931 LBA Forma[2024-11-26 23:41:51.027213] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74421 terminated unexpected 00:08:02.931 t #05: Data Size: 4096 Metadata Size: 8 00:08:02.931 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.931 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.931 00:08:02.931 NVM Specific Namespace Data 00:08:02.931 =========================== 00:08:02.931 Logical Block Storage Tag Mask: 0 00:08:02.931 Protection Information Capabilities: 00:08:02.931 16b Guard Protection Information Storage Tag Support: No 00:08:02.931 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.931 Storage Tag Check Read Support: No 00:08:02.931 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.931 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.931 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.931 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.931 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.931 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.931 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.931 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.931 ===================================================== 00:08:02.931 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:02.931 ===================================================== 00:08:02.931 Controller Capabilities/Features 00:08:02.931 ================================ 00:08:02.931 Vendor ID: 1b36 00:08:02.931 Subsystem Vendor ID: 1af4 00:08:02.931 Serial Number: 12342 00:08:02.931 Model Number: QEMU NVMe Ctrl 00:08:02.931 Firmware Version: 8.0.0 00:08:02.931 Recommended Arb Burst: 6 00:08:02.931 IEEE OUI Identifier: 00 54 52 00:08:02.931 Multi-path I/O 
00:08:02.931 May have multiple subsystem ports: No 00:08:02.931 May have multiple controllers: No 00:08:02.931 Associated with SR-IOV VF: No 00:08:02.931 Max Data Transfer Size: 524288 00:08:02.931 Max Number of Namespaces: 256 00:08:02.931 Max Number of I/O Queues: 64 00:08:02.931 NVMe Specification Version (VS): 1.4 00:08:02.931 NVMe Specification Version (Identify): 1.4 00:08:02.931 Maximum Queue Entries: 2048 00:08:02.931 Contiguous Queues Required: Yes 00:08:02.931 Arbitration Mechanisms Supported 00:08:02.931 Weighted Round Robin: Not Supported 00:08:02.931 Vendor Specific: Not Supported 00:08:02.931 Reset Timeout: 7500 ms 00:08:02.931 Doorbell Stride: 4 bytes 00:08:02.931 NVM Subsystem Reset: Not Supported 00:08:02.931 Command Sets Supported 00:08:02.931 NVM Command Set: Supported 00:08:02.931 Boot Partition: Not Supported 00:08:02.931 Memory Page Size Minimum: 4096 bytes 00:08:02.931 Memory Page Size Maximum: 65536 bytes 00:08:02.931 Persistent Memory Region: Not Supported 00:08:02.931 Optional Asynchronous Events Supported 00:08:02.931 Namespace Attribute Notices: Supported 00:08:02.931 Firmware Activation Notices: Not Supported 00:08:02.931 ANA Change Notices: Not Supported 00:08:02.931 PLE Aggregate Log Change Notices: Not Supported 00:08:02.931 LBA Status Info Alert Notices: Not Supported 00:08:02.931 EGE Aggregate Log Change Notices: Not Supported 00:08:02.931 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.931 Zone Descriptor Change Notices: Not Supported 00:08:02.931 Discovery Log Change Notices: Not Supported 00:08:02.931 Controller Attributes 00:08:02.931 128-bit Host Identifier: Not Supported 00:08:02.931 Non-Operational Permissive Mode: Not Supported 00:08:02.931 NVM Sets: Not Supported 00:08:02.931 Read Recovery Levels: Not Supported 00:08:02.931 Endurance Groups: Not Supported 00:08:02.931 Predictable Latency Mode: Not Supported 00:08:02.931 Traffic Based Keep ALive: Not Supported 00:08:02.931 Namespace Granularity: Not Supported 00:08:02.931 SQ Associations: Not Supported 00:08:02.931 UUID List: Not Supported 00:08:02.931 Multi-Domain Subsystem: Not Supported 00:08:02.931 Fixed Capacity Management: Not Supported 00:08:02.931 Variable Capacity Management: Not Supported 00:08:02.931 Delete Endurance Group: Not Supported 00:08:02.931 Delete NVM Set: Not Supported 00:08:02.931 Extended LBA Formats Supported: Supported 00:08:02.931 Flexible Data Placement Supported: Not Supported 00:08:02.931 00:08:02.931 Controller Memory Buffer Support 00:08:02.931 ================================ 00:08:02.931 Supported: No 00:08:02.931 00:08:02.931 Persistent Memory Region Support 00:08:02.931 ================================ 00:08:02.931 Supported: No 00:08:02.931 00:08:02.931 Admin Command Set Attributes 00:08:02.931 ============================ 00:08:02.931 Security Send/Receive: Not Supported 00:08:02.931 Format NVM: Supported 00:08:02.931 Firmware Activate/Download: Not Supported 00:08:02.931 Namespace Management: Supported 00:08:02.931 Device Self-Test: Not Supported 00:08:02.931 Directives: Supported 00:08:02.931 NVMe-MI: Not Supported 00:08:02.931 Virtualization Management: Not Supported 00:08:02.931 Doorbell Buffer Config: Supported 00:08:02.931 Get LBA Status Capability: Not Supported 00:08:02.931 Command & Feature Lockdown Capability: Not Supported 00:08:02.931 Abort Command Limit: 4 00:08:02.931 Async Event Request Limit: 4 00:08:02.931 Number of Firmware Slots: N/A 00:08:02.931 Firmware Slot 1 Read-Only: N/A 00:08:02.931 Firmware Activation Without Reset: N/A 
00:08:02.931 Multiple Update Detection Support: N/A 00:08:02.931 Firmware Update Granularity: No Information Provided 00:08:02.931 Per-Namespace SMART Log: Yes 00:08:02.931 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.931 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:02.931 Command Effects Log Page: Supported 00:08:02.931 Get Log Page Extended Data: Supported 00:08:02.931 Telemetry Log Pages: Not Supported 00:08:02.931 Persistent Event Log Pages: Not Supported 00:08:02.931 Supported Log Pages Log Page: May Support 00:08:02.931 Commands Supported & Effects Log Page: Not Supported 00:08:02.931 Feature Identifiers & Effects Log Page:May Support 00:08:02.931 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.931 Data Area 4 for Telemetry Log: Not Supported 00:08:02.931 Error Log Page Entries Supported: 1 00:08:02.931 Keep Alive: Not Supported 00:08:02.931 00:08:02.931 NVM Command Set Attributes 00:08:02.931 ========================== 00:08:02.931 Submission Queue Entry Size 00:08:02.931 Max: 64 00:08:02.931 Min: 64 00:08:02.931 Completion Queue Entry Size 00:08:02.931 Max: 16 00:08:02.931 Min: 16 00:08:02.932 Number of Namespaces: 256 00:08:02.932 Compare Command: Supported 00:08:02.932 Write Uncorrectable Command: Not Supported 00:08:02.932 Dataset Management Command: Supported 00:08:02.932 Write Zeroes Command: Supported 00:08:02.932 Set Features Save Field: Supported 00:08:02.932 Reservations: Not Supported 00:08:02.932 Timestamp: Supported 00:08:02.932 Copy: Supported 00:08:02.932 Volatile Write Cache: Present 00:08:02.932 Atomic Write Unit (Normal): 1 00:08:02.932 Atomic Write Unit (PFail): 1 00:08:02.932 Atomic Compare & Write Unit: 1 00:08:02.932 Fused Compare & Write: Not Supported 00:08:02.932 Scatter-Gather List 00:08:02.932 SGL Command Set: Supported 00:08:02.932 SGL Keyed: Not Supported 00:08:02.932 SGL Bit Bucket Descriptor: Not Supported 00:08:02.932 SGL Metadata Pointer: Not Supported 00:08:02.932 Oversized SGL: Not Supported 00:08:02.932 SGL Metadata Address: Not Supported 00:08:02.932 SGL Offset: Not Supported 00:08:02.932 Transport SGL Data Block: Not Supported 00:08:02.932 Replay Protected Memory Block: Not Supported 00:08:02.932 00:08:02.932 Firmware Slot Information 00:08:02.932 ========================= 00:08:02.932 Active slot: 1 00:08:02.932 Slot 1 Firmware Revision: 1.0 00:08:02.932 00:08:02.932 00:08:02.932 Commands Supported and Effects 00:08:02.932 ============================== 00:08:02.932 Admin Commands 00:08:02.932 -------------- 00:08:02.932 Delete I/O Submission Queue (00h): Supported 00:08:02.932 Create I/O Submission Queue (01h): Supported 00:08:02.932 Get Log Page (02h): Supported 00:08:02.932 Delete I/O Completion Queue (04h): Supported 00:08:02.932 Create I/O Completion Queue (05h): Supported 00:08:02.932 Identify (06h): Supported 00:08:02.932 Abort (08h): Supported 00:08:02.932 Set Features (09h): Supported 00:08:02.932 Get Features (0Ah): Supported 00:08:02.932 Asynchronous Event Request (0Ch): Supported 00:08:02.932 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.932 Directive Send (19h): Supported 00:08:02.932 Directive Receive (1Ah): Supported 00:08:02.932 Virtualization Management (1Ch): Supported 00:08:02.932 Doorbell Buffer Config (7Ch): Supported 00:08:02.932 Format NVM (80h): Supported LBA-Change 00:08:02.932 I/O Commands 00:08:02.932 ------------ 00:08:02.932 Flush (00h): Supported LBA-Change 00:08:02.932 Write (01h): Supported LBA-Change 00:08:02.932 Read (02h): Supported 00:08:02.932 Compare (05h): 
Supported 00:08:02.932 Write Zeroes (08h): Supported LBA-Change 00:08:02.932 Dataset Management (09h): Supported LBA-Change 00:08:02.932 Unknown (0Ch): Supported 00:08:02.932 Unknown (12h): Supported 00:08:02.932 Copy (19h): Supported LBA-Change 00:08:02.932 Unknown (1Dh): Supported LBA-Change 00:08:02.932 00:08:02.932 Error Log 00:08:02.932 ========= 00:08:02.932 00:08:02.932 Arbitration 00:08:02.932 =========== 00:08:02.932 Arbitration Burst: no limit 00:08:02.932 00:08:02.932 Power Management 00:08:02.932 ================ 00:08:02.932 Number of Power States: 1 00:08:02.932 Current Power State: Power State #0 00:08:02.932 Power State #0: 00:08:02.932 Max Power: 25.00 W 00:08:02.932 Non-Operational State: Operational 00:08:02.932 Entry Latency: 16 microseconds 00:08:02.932 Exit Latency: 4 microseconds 00:08:02.932 Relative Read Throughput: 0 00:08:02.932 Relative Read Latency: 0 00:08:02.932 Relative Write Throughput: 0 00:08:02.932 Relative Write Latency: 0 00:08:02.932 Idle Power: Not Reported 00:08:02.932 Active Power: Not Reported 00:08:02.932 Non-Operational Permissive Mode: Not Supported 00:08:02.932 00:08:02.932 Health Information 00:08:02.932 ================== 00:08:02.932 Critical Warnings: 00:08:02.932 Available Spare Space: OK 00:08:02.932 Temperature: OK 00:08:02.932 Device Reliability: OK 00:08:02.932 Read Only: No 00:08:02.932 Volatile Memory Backup: OK 00:08:02.932 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.932 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.932 Available Spare: 0% 00:08:02.932 Available Spare Threshold: 0% 00:08:02.932 Life Percentage Used: 0% 00:08:02.932 Data Units Read: 2383 00:08:02.932 Data Units Written: 2170 00:08:02.932 Host Read Commands: 117674 00:08:02.932 Host Write Commands: 115944 00:08:02.932 Controller Busy Time: 0 minutes 00:08:02.932 Power Cycles: 0 00:08:02.932 Power On Hours: 0 hours 00:08:02.932 Unsafe Shutdowns: 0 00:08:02.932 Unrecoverable Media Errors: 0 00:08:02.932 Lifetime Error Log Entries: 0 00:08:02.932 Warning Temperature Time: 0 minutes 00:08:02.932 Critical Temperature Time: 0 minutes 00:08:02.932 00:08:02.932 Number of Queues 00:08:02.932 ================ 00:08:02.932 Number of I/O Submission Queues: 64 00:08:02.932 Number of I/O Completion Queues: 64 00:08:02.932 00:08:02.932 ZNS Specific Controller Data 00:08:02.932 ============================ 00:08:02.932 Zone Append Size Limit: 0 00:08:02.932 00:08:02.932 00:08:02.932 Active Namespaces 00:08:02.932 ================= 00:08:02.932 Namespace ID:1 00:08:02.932 Error Recovery Timeout: Unlimited 00:08:02.932 Command Set Identifier: NVM (00h) 00:08:02.932 Deallocate: Supported 00:08:02.932 Deallocated/Unwritten Error: Supported 00:08:02.932 Deallocated Read Value: All 0x00 00:08:02.932 Deallocate in Write Zeroes: Not Supported 00:08:02.932 Deallocated Guard Field: 0xFFFF 00:08:02.932 Flush: Supported 00:08:02.932 Reservation: Not Supported 00:08:02.932 Namespace Sharing Capabilities: Private 00:08:02.932 Size (in LBAs): 1048576 (4GiB) 00:08:02.932 Capacity (in LBAs): 1048576 (4GiB) 00:08:02.932 Utilization (in LBAs): 1048576 (4GiB) 00:08:02.932 Thin Provisioning: Not Supported 00:08:02.932 Per-NS Atomic Units: No 00:08:02.932 Maximum Single Source Range Length: 128 00:08:02.932 Maximum Copy Length: 128 00:08:02.932 Maximum Source Range Count: 128 00:08:02.932 NGUID/EUI64 Never Reused: No 00:08:02.932 Namespace Write Protected: No 00:08:02.932 Number of LBA Formats: 8 00:08:02.932 Current LBA Format: LBA Format #04 00:08:02.932 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:02.932 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.932 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.932 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.932 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.932 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.932 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.932 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.932 00:08:02.932 NVM Specific Namespace Data 00:08:02.932 =========================== 00:08:02.932 Logical Block Storage Tag Mask: 0 00:08:02.932 Protection Information Capabilities: 00:08:02.932 16b Guard Protection Information Storage Tag Support: No 00:08:02.932 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.932 Storage Tag Check Read Support: No 00:08:02.932 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.932 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.932 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.932 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.932 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.932 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.932 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.932 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.932 Namespace ID:2 00:08:02.932 Error Recovery Timeout: Unlimited 00:08:02.932 Command Set Identifier: NVM (00h) 00:08:02.932 Deallocate: Supported 00:08:02.932 Deallocated/Unwritten Error: Supported 00:08:02.932 Deallocated Read Value: All 0x00 00:08:02.932 Deallocate in Write Zeroes: Not Supported 00:08:02.932 Deallocated Guard Field: 0xFFFF 00:08:02.932 Flush: Supported 00:08:02.932 Reservation: Not Supported 00:08:02.932 Namespace Sharing Capabilities: Private 00:08:02.932 Size (in LBAs): 1048576 (4GiB) 00:08:02.932 Capacity (in LBAs): 1048576 (4GiB) 00:08:02.932 Utilization (in LBAs): 1048576 (4GiB) 00:08:02.932 Thin Provisioning: Not Supported 00:08:02.932 Per-NS Atomic Units: No 00:08:02.932 Maximum Single Source Range Length: 128 00:08:02.932 Maximum Copy Length: 128 00:08:02.932 Maximum Source Range Count: 128 00:08:02.932 NGUID/EUI64 Never Reused: No 00:08:02.932 Namespace Write Protected: No 00:08:02.932 Number of LBA Formats: 8 00:08:02.932 Current LBA Format: LBA Format #04 00:08:02.932 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.932 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.932 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.932 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.932 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.932 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.932 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.933 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.933 00:08:02.933 NVM Specific Namespace Data 00:08:02.933 =========================== 00:08:02.933 Logical Block Storage Tag Mask: 0 00:08:02.933 Protection Information Capabilities: 00:08:02.933 16b Guard Protection Information Storage Tag Support: No 00:08:02.933 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:02.933 Storage Tag Check Read Support: No 00:08:02.933 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.933 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.933 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.933 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.933 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.933 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.933 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.933 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.933 Namespace ID:3 00:08:02.933 Error Recovery Timeout: Unlimited 00:08:02.933 Command Set Identifier: NVM (00h) 00:08:02.933 Deallocate: Supported 00:08:02.933 Deallocated/Unwritten Error: Supported 00:08:02.933 Deallocated Read Value: All 0x00 00:08:02.933 Deallocate in Write Zeroes: Not Supported 00:08:02.933 Deallocated Guard Field: 0xFFFF 00:08:02.933 Flush: Supported 00:08:02.933 Reservation: Not Supported 00:08:02.933 Namespace Sharing Capabilities: Private 00:08:02.933 Size (in LBAs): 1048576 (4GiB) 00:08:03.193 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.193 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.193 Thin Provisioning: Not Supported 00:08:03.193 Per-NS Atomic Units: No 00:08:03.193 Maximum Single Source Range Length: 128 00:08:03.193 Maximum Copy Length: 128 00:08:03.193 Maximum Source Range Count: 128 00:08:03.193 NGUID/EUI64 Never Reused: No 00:08:03.193 Namespace Write Protected: No 00:08:03.193 Number of LBA Formats: 8 00:08:03.193 Current LBA Format: LBA Format #04 00:08:03.193 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.193 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.193 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.193 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.193 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.193 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.193 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.193 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.193 00:08:03.193 NVM Specific Namespace Data 00:08:03.193 =========================== 00:08:03.193 Logical Block Storage Tag Mask: 0 00:08:03.193 Protection Information Capabilities: 00:08:03.193 16b Guard Protection Information Storage Tag Support: No 00:08:03.193 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.193 Storage Tag Check Read Support: No 00:08:03.193 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.193 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.193 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.193 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.193 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.193 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.193 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.193 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.193 23:41:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:03.193 23:41:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:03.193 ===================================================== 00:08:03.193 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.193 ===================================================== 00:08:03.194 Controller Capabilities/Features 00:08:03.194 ================================ 00:08:03.194 Vendor ID: 1b36 00:08:03.194 Subsystem Vendor ID: 1af4 00:08:03.194 Serial Number: 12340 00:08:03.194 Model Number: QEMU NVMe Ctrl 00:08:03.194 Firmware Version: 8.0.0 00:08:03.194 Recommended Arb Burst: 6 00:08:03.194 IEEE OUI Identifier: 00 54 52 00:08:03.194 Multi-path I/O 00:08:03.194 May have multiple subsystem ports: No 00:08:03.194 May have multiple controllers: No 00:08:03.194 Associated with SR-IOV VF: No 00:08:03.194 Max Data Transfer Size: 524288 00:08:03.194 Max Number of Namespaces: 256 00:08:03.194 Max Number of I/O Queues: 64 00:08:03.194 NVMe Specification Version (VS): 1.4 00:08:03.194 NVMe Specification Version (Identify): 1.4 00:08:03.194 Maximum Queue Entries: 2048 00:08:03.194 Contiguous Queues Required: Yes 00:08:03.194 Arbitration Mechanisms Supported 00:08:03.194 Weighted Round Robin: Not Supported 00:08:03.194 Vendor Specific: Not Supported 00:08:03.194 Reset Timeout: 7500 ms 00:08:03.194 Doorbell Stride: 4 bytes 00:08:03.194 NVM Subsystem Reset: Not Supported 00:08:03.194 Command Sets Supported 00:08:03.194 NVM Command Set: Supported 00:08:03.194 Boot Partition: Not Supported 00:08:03.194 Memory Page Size Minimum: 4096 bytes 00:08:03.194 Memory Page Size Maximum: 65536 bytes 00:08:03.194 Persistent Memory Region: Not Supported 00:08:03.194 Optional Asynchronous Events Supported 00:08:03.194 Namespace Attribute Notices: Supported 00:08:03.194 Firmware Activation Notices: Not Supported 00:08:03.194 ANA Change Notices: Not Supported 00:08:03.194 PLE Aggregate Log Change Notices: Not Supported 00:08:03.194 LBA Status Info Alert Notices: Not Supported 00:08:03.194 EGE Aggregate Log Change Notices: Not Supported 00:08:03.194 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.194 Zone Descriptor Change Notices: Not Supported 00:08:03.194 Discovery Log Change Notices: Not Supported 00:08:03.194 Controller Attributes 00:08:03.194 128-bit Host Identifier: Not Supported 00:08:03.194 Non-Operational Permissive Mode: Not Supported 00:08:03.194 NVM Sets: Not Supported 00:08:03.194 Read Recovery Levels: Not Supported 00:08:03.194 Endurance Groups: Not Supported 00:08:03.194 Predictable Latency Mode: Not Supported 00:08:03.194 Traffic Based Keep ALive: Not Supported 00:08:03.194 Namespace Granularity: Not Supported 00:08:03.194 SQ Associations: Not Supported 00:08:03.194 UUID List: Not Supported 00:08:03.194 Multi-Domain Subsystem: Not Supported 00:08:03.194 Fixed Capacity Management: Not Supported 00:08:03.194 Variable Capacity Management: Not Supported 00:08:03.194 Delete Endurance Group: Not Supported 00:08:03.194 Delete NVM Set: Not Supported 00:08:03.194 Extended LBA Formats Supported: Supported 00:08:03.194 Flexible Data Placement Supported: Not Supported 00:08:03.194 00:08:03.194 Controller Memory Buffer Support 00:08:03.194 ================================ 00:08:03.194 Supported: No 00:08:03.194 00:08:03.194 Persistent Memory Region Support 00:08:03.194 
================================ 00:08:03.194 Supported: No 00:08:03.194 00:08:03.194 Admin Command Set Attributes 00:08:03.194 ============================ 00:08:03.194 Security Send/Receive: Not Supported 00:08:03.194 Format NVM: Supported 00:08:03.194 Firmware Activate/Download: Not Supported 00:08:03.194 Namespace Management: Supported 00:08:03.194 Device Self-Test: Not Supported 00:08:03.194 Directives: Supported 00:08:03.194 NVMe-MI: Not Supported 00:08:03.194 Virtualization Management: Not Supported 00:08:03.194 Doorbell Buffer Config: Supported 00:08:03.194 Get LBA Status Capability: Not Supported 00:08:03.194 Command & Feature Lockdown Capability: Not Supported 00:08:03.194 Abort Command Limit: 4 00:08:03.194 Async Event Request Limit: 4 00:08:03.194 Number of Firmware Slots: N/A 00:08:03.194 Firmware Slot 1 Read-Only: N/A 00:08:03.194 Firmware Activation Without Reset: N/A 00:08:03.194 Multiple Update Detection Support: N/A 00:08:03.194 Firmware Update Granularity: No Information Provided 00:08:03.194 Per-Namespace SMART Log: Yes 00:08:03.194 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.194 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:03.194 Command Effects Log Page: Supported 00:08:03.194 Get Log Page Extended Data: Supported 00:08:03.194 Telemetry Log Pages: Not Supported 00:08:03.194 Persistent Event Log Pages: Not Supported 00:08:03.194 Supported Log Pages Log Page: May Support 00:08:03.194 Commands Supported & Effects Log Page: Not Supported 00:08:03.194 Feature Identifiers & Effects Log Page:May Support 00:08:03.194 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.194 Data Area 4 for Telemetry Log: Not Supported 00:08:03.194 Error Log Page Entries Supported: 1 00:08:03.194 Keep Alive: Not Supported 00:08:03.194 00:08:03.194 NVM Command Set Attributes 00:08:03.194 ========================== 00:08:03.194 Submission Queue Entry Size 00:08:03.194 Max: 64 00:08:03.194 Min: 64 00:08:03.194 Completion Queue Entry Size 00:08:03.194 Max: 16 00:08:03.194 Min: 16 00:08:03.194 Number of Namespaces: 256 00:08:03.194 Compare Command: Supported 00:08:03.194 Write Uncorrectable Command: Not Supported 00:08:03.194 Dataset Management Command: Supported 00:08:03.194 Write Zeroes Command: Supported 00:08:03.194 Set Features Save Field: Supported 00:08:03.194 Reservations: Not Supported 00:08:03.194 Timestamp: Supported 00:08:03.194 Copy: Supported 00:08:03.194 Volatile Write Cache: Present 00:08:03.194 Atomic Write Unit (Normal): 1 00:08:03.194 Atomic Write Unit (PFail): 1 00:08:03.194 Atomic Compare & Write Unit: 1 00:08:03.194 Fused Compare & Write: Not Supported 00:08:03.194 Scatter-Gather List 00:08:03.194 SGL Command Set: Supported 00:08:03.194 SGL Keyed: Not Supported 00:08:03.194 SGL Bit Bucket Descriptor: Not Supported 00:08:03.194 SGL Metadata Pointer: Not Supported 00:08:03.194 Oversized SGL: Not Supported 00:08:03.194 SGL Metadata Address: Not Supported 00:08:03.194 SGL Offset: Not Supported 00:08:03.194 Transport SGL Data Block: Not Supported 00:08:03.194 Replay Protected Memory Block: Not Supported 00:08:03.194 00:08:03.194 Firmware Slot Information 00:08:03.194 ========================= 00:08:03.194 Active slot: 1 00:08:03.194 Slot 1 Firmware Revision: 1.0 00:08:03.194 00:08:03.194 00:08:03.194 Commands Supported and Effects 00:08:03.194 ============================== 00:08:03.194 Admin Commands 00:08:03.194 -------------- 00:08:03.194 Delete I/O Submission Queue (00h): Supported 00:08:03.194 Create I/O Submission Queue (01h): Supported 00:08:03.194 
Get Log Page (02h): Supported 00:08:03.194 Delete I/O Completion Queue (04h): Supported 00:08:03.194 Create I/O Completion Queue (05h): Supported 00:08:03.194 Identify (06h): Supported 00:08:03.194 Abort (08h): Supported 00:08:03.194 Set Features (09h): Supported 00:08:03.194 Get Features (0Ah): Supported 00:08:03.194 Asynchronous Event Request (0Ch): Supported 00:08:03.194 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.194 Directive Send (19h): Supported 00:08:03.194 Directive Receive (1Ah): Supported 00:08:03.194 Virtualization Management (1Ch): Supported 00:08:03.194 Doorbell Buffer Config (7Ch): Supported 00:08:03.194 Format NVM (80h): Supported LBA-Change 00:08:03.194 I/O Commands 00:08:03.194 ------------ 00:08:03.194 Flush (00h): Supported LBA-Change 00:08:03.194 Write (01h): Supported LBA-Change 00:08:03.194 Read (02h): Supported 00:08:03.194 Compare (05h): Supported 00:08:03.194 Write Zeroes (08h): Supported LBA-Change 00:08:03.194 Dataset Management (09h): Supported LBA-Change 00:08:03.194 Unknown (0Ch): Supported 00:08:03.194 Unknown (12h): Supported 00:08:03.194 Copy (19h): Supported LBA-Change 00:08:03.194 Unknown (1Dh): Supported LBA-Change 00:08:03.194 00:08:03.194 Error Log 00:08:03.194 ========= 00:08:03.194 00:08:03.194 Arbitration 00:08:03.194 =========== 00:08:03.194 Arbitration Burst: no limit 00:08:03.194 00:08:03.194 Power Management 00:08:03.194 ================ 00:08:03.194 Number of Power States: 1 00:08:03.194 Current Power State: Power State #0 00:08:03.194 Power State #0: 00:08:03.194 Max Power: 25.00 W 00:08:03.194 Non-Operational State: Operational 00:08:03.194 Entry Latency: 16 microseconds 00:08:03.194 Exit Latency: 4 microseconds 00:08:03.194 Relative Read Throughput: 0 00:08:03.194 Relative Read Latency: 0 00:08:03.194 Relative Write Throughput: 0 00:08:03.194 Relative Write Latency: 0 00:08:03.194 Idle Power: Not Reported 00:08:03.194 Active Power: Not Reported 00:08:03.194 Non-Operational Permissive Mode: Not Supported 00:08:03.194 00:08:03.194 Health Information 00:08:03.194 ================== 00:08:03.194 Critical Warnings: 00:08:03.194 Available Spare Space: OK 00:08:03.195 Temperature: OK 00:08:03.195 Device Reliability: OK 00:08:03.195 Read Only: No 00:08:03.195 Volatile Memory Backup: OK 00:08:03.195 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.195 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.195 Available Spare: 0% 00:08:03.195 Available Spare Threshold: 0% 00:08:03.195 Life Percentage Used: 0% 00:08:03.195 Data Units Read: 736 00:08:03.195 Data Units Written: 664 00:08:03.195 Host Read Commands: 38528 00:08:03.195 Host Write Commands: 38314 00:08:03.195 Controller Busy Time: 0 minutes 00:08:03.195 Power Cycles: 0 00:08:03.195 Power On Hours: 0 hours 00:08:03.195 Unsafe Shutdowns: 0 00:08:03.195 Unrecoverable Media Errors: 0 00:08:03.195 Lifetime Error Log Entries: 0 00:08:03.195 Warning Temperature Time: 0 minutes 00:08:03.195 Critical Temperature Time: 0 minutes 00:08:03.195 00:08:03.195 Number of Queues 00:08:03.195 ================ 00:08:03.195 Number of I/O Submission Queues: 64 00:08:03.195 Number of I/O Completion Queues: 64 00:08:03.195 00:08:03.195 ZNS Specific Controller Data 00:08:03.195 ============================ 00:08:03.195 Zone Append Size Limit: 0 00:08:03.195 00:08:03.195 00:08:03.195 Active Namespaces 00:08:03.195 ================= 00:08:03.195 Namespace ID:1 00:08:03.195 Error Recovery Timeout: Unlimited 00:08:03.195 Command Set Identifier: NVM (00h) 00:08:03.195 Deallocate: Supported 
00:08:03.195 Deallocated/Unwritten Error: Supported 00:08:03.195 Deallocated Read Value: All 0x00 00:08:03.195 Deallocate in Write Zeroes: Not Supported 00:08:03.195 Deallocated Guard Field: 0xFFFF 00:08:03.195 Flush: Supported 00:08:03.195 Reservation: Not Supported 00:08:03.195 Metadata Transferred as: Separate Metadata Buffer 00:08:03.195 Namespace Sharing Capabilities: Private 00:08:03.195 Size (in LBAs): 1548666 (5GiB) 00:08:03.195 Capacity (in LBAs): 1548666 (5GiB) 00:08:03.195 Utilization (in LBAs): 1548666 (5GiB) 00:08:03.195 Thin Provisioning: Not Supported 00:08:03.195 Per-NS Atomic Units: No 00:08:03.195 Maximum Single Source Range Length: 128 00:08:03.195 Maximum Copy Length: 128 00:08:03.195 Maximum Source Range Count: 128 00:08:03.195 NGUID/EUI64 Never Reused: No 00:08:03.195 Namespace Write Protected: No 00:08:03.195 Number of LBA Formats: 8 00:08:03.195 Current LBA Format: LBA Format #07 00:08:03.195 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.195 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.195 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.195 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.195 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.195 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.195 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.195 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.195 00:08:03.195 NVM Specific Namespace Data 00:08:03.195 =========================== 00:08:03.195 Logical Block Storage Tag Mask: 0 00:08:03.195 Protection Information Capabilities: 00:08:03.195 16b Guard Protection Information Storage Tag Support: No 00:08:03.195 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.195 Storage Tag Check Read Support: No 00:08:03.195 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.195 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.195 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.195 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.195 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.195 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.195 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.195 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.195 23:41:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:03.195 23:41:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:03.455 ===================================================== 00:08:03.455 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.455 ===================================================== 00:08:03.455 Controller Capabilities/Features 00:08:03.455 ================================ 00:08:03.455 Vendor ID: 1b36 00:08:03.455 Subsystem Vendor ID: 1af4 00:08:03.455 Serial Number: 12341 00:08:03.455 Model Number: QEMU NVMe Ctrl 00:08:03.455 Firmware Version: 8.0.0 00:08:03.455 Recommended Arb Burst: 6 00:08:03.455 IEEE OUI Identifier: 00 54 52 00:08:03.455 Multi-path I/O 00:08:03.455 May have multiple subsystem ports: No 00:08:03.455 May have multiple 
controllers: No 00:08:03.455 Associated with SR-IOV VF: No 00:08:03.455 Max Data Transfer Size: 524288 00:08:03.455 Max Number of Namespaces: 256 00:08:03.455 Max Number of I/O Queues: 64 00:08:03.455 NVMe Specification Version (VS): 1.4 00:08:03.455 NVMe Specification Version (Identify): 1.4 00:08:03.455 Maximum Queue Entries: 2048 00:08:03.455 Contiguous Queues Required: Yes 00:08:03.455 Arbitration Mechanisms Supported 00:08:03.455 Weighted Round Robin: Not Supported 00:08:03.455 Vendor Specific: Not Supported 00:08:03.455 Reset Timeout: 7500 ms 00:08:03.455 Doorbell Stride: 4 bytes 00:08:03.455 NVM Subsystem Reset: Not Supported 00:08:03.455 Command Sets Supported 00:08:03.455 NVM Command Set: Supported 00:08:03.455 Boot Partition: Not Supported 00:08:03.455 Memory Page Size Minimum: 4096 bytes 00:08:03.455 Memory Page Size Maximum: 65536 bytes 00:08:03.455 Persistent Memory Region: Not Supported 00:08:03.455 Optional Asynchronous Events Supported 00:08:03.455 Namespace Attribute Notices: Supported 00:08:03.455 Firmware Activation Notices: Not Supported 00:08:03.455 ANA Change Notices: Not Supported 00:08:03.455 PLE Aggregate Log Change Notices: Not Supported 00:08:03.455 LBA Status Info Alert Notices: Not Supported 00:08:03.455 EGE Aggregate Log Change Notices: Not Supported 00:08:03.455 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.455 Zone Descriptor Change Notices: Not Supported 00:08:03.455 Discovery Log Change Notices: Not Supported 00:08:03.455 Controller Attributes 00:08:03.455 128-bit Host Identifier: Not Supported 00:08:03.455 Non-Operational Permissive Mode: Not Supported 00:08:03.455 NVM Sets: Not Supported 00:08:03.455 Read Recovery Levels: Not Supported 00:08:03.455 Endurance Groups: Not Supported 00:08:03.455 Predictable Latency Mode: Not Supported 00:08:03.455 Traffic Based Keep ALive: Not Supported 00:08:03.455 Namespace Granularity: Not Supported 00:08:03.455 SQ Associations: Not Supported 00:08:03.455 UUID List: Not Supported 00:08:03.455 Multi-Domain Subsystem: Not Supported 00:08:03.455 Fixed Capacity Management: Not Supported 00:08:03.455 Variable Capacity Management: Not Supported 00:08:03.455 Delete Endurance Group: Not Supported 00:08:03.455 Delete NVM Set: Not Supported 00:08:03.455 Extended LBA Formats Supported: Supported 00:08:03.455 Flexible Data Placement Supported: Not Supported 00:08:03.455 00:08:03.455 Controller Memory Buffer Support 00:08:03.455 ================================ 00:08:03.455 Supported: No 00:08:03.455 00:08:03.455 Persistent Memory Region Support 00:08:03.455 ================================ 00:08:03.455 Supported: No 00:08:03.455 00:08:03.455 Admin Command Set Attributes 00:08:03.455 ============================ 00:08:03.455 Security Send/Receive: Not Supported 00:08:03.455 Format NVM: Supported 00:08:03.455 Firmware Activate/Download: Not Supported 00:08:03.455 Namespace Management: Supported 00:08:03.455 Device Self-Test: Not Supported 00:08:03.455 Directives: Supported 00:08:03.455 NVMe-MI: Not Supported 00:08:03.455 Virtualization Management: Not Supported 00:08:03.456 Doorbell Buffer Config: Supported 00:08:03.456 Get LBA Status Capability: Not Supported 00:08:03.456 Command & Feature Lockdown Capability: Not Supported 00:08:03.456 Abort Command Limit: 4 00:08:03.456 Async Event Request Limit: 4 00:08:03.456 Number of Firmware Slots: N/A 00:08:03.456 Firmware Slot 1 Read-Only: N/A 00:08:03.456 Firmware Activation Without Reset: N/A 00:08:03.456 Multiple Update Detection Support: N/A 00:08:03.456 Firmware Update 
Granularity: No Information Provided 00:08:03.456 Per-Namespace SMART Log: Yes 00:08:03.456 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.456 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:03.456 Command Effects Log Page: Supported 00:08:03.456 Get Log Page Extended Data: Supported 00:08:03.456 Telemetry Log Pages: Not Supported 00:08:03.456 Persistent Event Log Pages: Not Supported 00:08:03.456 Supported Log Pages Log Page: May Support 00:08:03.456 Commands Supported & Effects Log Page: Not Supported 00:08:03.456 Feature Identifiers & Effects Log Page:May Support 00:08:03.456 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.456 Data Area 4 for Telemetry Log: Not Supported 00:08:03.456 Error Log Page Entries Supported: 1 00:08:03.456 Keep Alive: Not Supported 00:08:03.456 00:08:03.456 NVM Command Set Attributes 00:08:03.456 ========================== 00:08:03.456 Submission Queue Entry Size 00:08:03.456 Max: 64 00:08:03.456 Min: 64 00:08:03.456 Completion Queue Entry Size 00:08:03.456 Max: 16 00:08:03.456 Min: 16 00:08:03.456 Number of Namespaces: 256 00:08:03.456 Compare Command: Supported 00:08:03.456 Write Uncorrectable Command: Not Supported 00:08:03.456 Dataset Management Command: Supported 00:08:03.456 Write Zeroes Command: Supported 00:08:03.456 Set Features Save Field: Supported 00:08:03.456 Reservations: Not Supported 00:08:03.456 Timestamp: Supported 00:08:03.456 Copy: Supported 00:08:03.456 Volatile Write Cache: Present 00:08:03.456 Atomic Write Unit (Normal): 1 00:08:03.456 Atomic Write Unit (PFail): 1 00:08:03.456 Atomic Compare & Write Unit: 1 00:08:03.456 Fused Compare & Write: Not Supported 00:08:03.456 Scatter-Gather List 00:08:03.456 SGL Command Set: Supported 00:08:03.456 SGL Keyed: Not Supported 00:08:03.456 SGL Bit Bucket Descriptor: Not Supported 00:08:03.456 SGL Metadata Pointer: Not Supported 00:08:03.456 Oversized SGL: Not Supported 00:08:03.456 SGL Metadata Address: Not Supported 00:08:03.456 SGL Offset: Not Supported 00:08:03.456 Transport SGL Data Block: Not Supported 00:08:03.456 Replay Protected Memory Block: Not Supported 00:08:03.456 00:08:03.456 Firmware Slot Information 00:08:03.456 ========================= 00:08:03.456 Active slot: 1 00:08:03.456 Slot 1 Firmware Revision: 1.0 00:08:03.456 00:08:03.456 00:08:03.456 Commands Supported and Effects 00:08:03.456 ============================== 00:08:03.456 Admin Commands 00:08:03.456 -------------- 00:08:03.456 Delete I/O Submission Queue (00h): Supported 00:08:03.456 Create I/O Submission Queue (01h): Supported 00:08:03.456 Get Log Page (02h): Supported 00:08:03.456 Delete I/O Completion Queue (04h): Supported 00:08:03.456 Create I/O Completion Queue (05h): Supported 00:08:03.456 Identify (06h): Supported 00:08:03.456 Abort (08h): Supported 00:08:03.456 Set Features (09h): Supported 00:08:03.456 Get Features (0Ah): Supported 00:08:03.456 Asynchronous Event Request (0Ch): Supported 00:08:03.456 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.456 Directive Send (19h): Supported 00:08:03.456 Directive Receive (1Ah): Supported 00:08:03.456 Virtualization Management (1Ch): Supported 00:08:03.456 Doorbell Buffer Config (7Ch): Supported 00:08:03.456 Format NVM (80h): Supported LBA-Change 00:08:03.456 I/O Commands 00:08:03.456 ------------ 00:08:03.456 Flush (00h): Supported LBA-Change 00:08:03.456 Write (01h): Supported LBA-Change 00:08:03.456 Read (02h): Supported 00:08:03.456 Compare (05h): Supported 00:08:03.456 Write Zeroes (08h): Supported LBA-Change 00:08:03.456 
Dataset Management (09h): Supported LBA-Change 00:08:03.456 Unknown (0Ch): Supported 00:08:03.456 Unknown (12h): Supported 00:08:03.456 Copy (19h): Supported LBA-Change 00:08:03.456 Unknown (1Dh): Supported LBA-Change 00:08:03.456 00:08:03.456 Error Log 00:08:03.456 ========= 00:08:03.456 00:08:03.456 Arbitration 00:08:03.456 =========== 00:08:03.456 Arbitration Burst: no limit 00:08:03.456 00:08:03.456 Power Management 00:08:03.456 ================ 00:08:03.456 Number of Power States: 1 00:08:03.456 Current Power State: Power State #0 00:08:03.456 Power State #0: 00:08:03.456 Max Power: 25.00 W 00:08:03.456 Non-Operational State: Operational 00:08:03.456 Entry Latency: 16 microseconds 00:08:03.456 Exit Latency: 4 microseconds 00:08:03.456 Relative Read Throughput: 0 00:08:03.456 Relative Read Latency: 0 00:08:03.456 Relative Write Throughput: 0 00:08:03.456 Relative Write Latency: 0 00:08:03.456 Idle Power: Not Reported 00:08:03.456 Active Power: Not Reported 00:08:03.456 Non-Operational Permissive Mode: Not Supported 00:08:03.456 00:08:03.456 Health Information 00:08:03.456 ================== 00:08:03.456 Critical Warnings: 00:08:03.456 Available Spare Space: OK 00:08:03.456 Temperature: OK 00:08:03.456 Device Reliability: OK 00:08:03.456 Read Only: No 00:08:03.456 Volatile Memory Backup: OK 00:08:03.456 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.456 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.456 Available Spare: 0% 00:08:03.456 Available Spare Threshold: 0% 00:08:03.456 Life Percentage Used: 0% 00:08:03.456 Data Units Read: 1078 00:08:03.456 Data Units Written: 946 00:08:03.456 Host Read Commands: 53971 00:08:03.456 Host Write Commands: 52760 00:08:03.456 Controller Busy Time: 0 minutes 00:08:03.456 Power Cycles: 0 00:08:03.456 Power On Hours: 0 hours 00:08:03.456 Unsafe Shutdowns: 0 00:08:03.456 Unrecoverable Media Errors: 0 00:08:03.456 Lifetime Error Log Entries: 0 00:08:03.456 Warning Temperature Time: 0 minutes 00:08:03.456 Critical Temperature Time: 0 minutes 00:08:03.456 00:08:03.456 Number of Queues 00:08:03.456 ================ 00:08:03.456 Number of I/O Submission Queues: 64 00:08:03.456 Number of I/O Completion Queues: 64 00:08:03.456 00:08:03.456 ZNS Specific Controller Data 00:08:03.456 ============================ 00:08:03.456 Zone Append Size Limit: 0 00:08:03.456 00:08:03.456 00:08:03.456 Active Namespaces 00:08:03.457 ================= 00:08:03.457 Namespace ID:1 00:08:03.457 Error Recovery Timeout: Unlimited 00:08:03.457 Command Set Identifier: NVM (00h) 00:08:03.457 Deallocate: Supported 00:08:03.457 Deallocated/Unwritten Error: Supported 00:08:03.457 Deallocated Read Value: All 0x00 00:08:03.457 Deallocate in Write Zeroes: Not Supported 00:08:03.457 Deallocated Guard Field: 0xFFFF 00:08:03.457 Flush: Supported 00:08:03.457 Reservation: Not Supported 00:08:03.457 Namespace Sharing Capabilities: Private 00:08:03.457 Size (in LBAs): 1310720 (5GiB) 00:08:03.457 Capacity (in LBAs): 1310720 (5GiB) 00:08:03.457 Utilization (in LBAs): 1310720 (5GiB) 00:08:03.457 Thin Provisioning: Not Supported 00:08:03.457 Per-NS Atomic Units: No 00:08:03.457 Maximum Single Source Range Length: 128 00:08:03.457 Maximum Copy Length: 128 00:08:03.457 Maximum Source Range Count: 128 00:08:03.457 NGUID/EUI64 Never Reused: No 00:08:03.457 Namespace Write Protected: No 00:08:03.457 Number of LBA Formats: 8 00:08:03.457 Current LBA Format: LBA Format #04 00:08:03.457 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.457 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:03.457 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.457 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.457 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.457 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.457 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.457 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.457 00:08:03.457 NVM Specific Namespace Data 00:08:03.457 =========================== 00:08:03.457 Logical Block Storage Tag Mask: 0 00:08:03.457 Protection Information Capabilities: 00:08:03.457 16b Guard Protection Information Storage Tag Support: No 00:08:03.457 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.457 Storage Tag Check Read Support: No 00:08:03.457 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.457 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.457 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.457 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.457 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.457 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.457 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.457 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.457 23:41:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:03.457 23:41:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:03.717 ===================================================== 00:08:03.718 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.718 ===================================================== 00:08:03.718 Controller Capabilities/Features 00:08:03.718 ================================ 00:08:03.718 Vendor ID: 1b36 00:08:03.718 Subsystem Vendor ID: 1af4 00:08:03.718 Serial Number: 12342 00:08:03.718 Model Number: QEMU NVMe Ctrl 00:08:03.718 Firmware Version: 8.0.0 00:08:03.718 Recommended Arb Burst: 6 00:08:03.718 IEEE OUI Identifier: 00 54 52 00:08:03.718 Multi-path I/O 00:08:03.718 May have multiple subsystem ports: No 00:08:03.718 May have multiple controllers: No 00:08:03.718 Associated with SR-IOV VF: No 00:08:03.718 Max Data Transfer Size: 524288 00:08:03.718 Max Number of Namespaces: 256 00:08:03.718 Max Number of I/O Queues: 64 00:08:03.718 NVMe Specification Version (VS): 1.4 00:08:03.718 NVMe Specification Version (Identify): 1.4 00:08:03.718 Maximum Queue Entries: 2048 00:08:03.718 Contiguous Queues Required: Yes 00:08:03.718 Arbitration Mechanisms Supported 00:08:03.718 Weighted Round Robin: Not Supported 00:08:03.718 Vendor Specific: Not Supported 00:08:03.718 Reset Timeout: 7500 ms 00:08:03.718 Doorbell Stride: 4 bytes 00:08:03.718 NVM Subsystem Reset: Not Supported 00:08:03.718 Command Sets Supported 00:08:03.718 NVM Command Set: Supported 00:08:03.718 Boot Partition: Not Supported 00:08:03.718 Memory Page Size Minimum: 4096 bytes 00:08:03.718 Memory Page Size Maximum: 65536 bytes 00:08:03.718 Persistent Memory Region: Not Supported 00:08:03.718 Optional Asynchronous Events Supported 00:08:03.718 Namespace Attribute Notices: Supported 00:08:03.718 Firmware 
Activation Notices: Not Supported 00:08:03.718 ANA Change Notices: Not Supported 00:08:03.718 PLE Aggregate Log Change Notices: Not Supported 00:08:03.718 LBA Status Info Alert Notices: Not Supported 00:08:03.718 EGE Aggregate Log Change Notices: Not Supported 00:08:03.718 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.718 Zone Descriptor Change Notices: Not Supported 00:08:03.718 Discovery Log Change Notices: Not Supported 00:08:03.718 Controller Attributes 00:08:03.718 128-bit Host Identifier: Not Supported 00:08:03.718 Non-Operational Permissive Mode: Not Supported 00:08:03.718 NVM Sets: Not Supported 00:08:03.718 Read Recovery Levels: Not Supported 00:08:03.718 Endurance Groups: Not Supported 00:08:03.718 Predictable Latency Mode: Not Supported 00:08:03.718 Traffic Based Keep ALive: Not Supported 00:08:03.718 Namespace Granularity: Not Supported 00:08:03.718 SQ Associations: Not Supported 00:08:03.718 UUID List: Not Supported 00:08:03.718 Multi-Domain Subsystem: Not Supported 00:08:03.718 Fixed Capacity Management: Not Supported 00:08:03.718 Variable Capacity Management: Not Supported 00:08:03.718 Delete Endurance Group: Not Supported 00:08:03.718 Delete NVM Set: Not Supported 00:08:03.718 Extended LBA Formats Supported: Supported 00:08:03.718 Flexible Data Placement Supported: Not Supported 00:08:03.718 00:08:03.718 Controller Memory Buffer Support 00:08:03.718 ================================ 00:08:03.718 Supported: No 00:08:03.718 00:08:03.718 Persistent Memory Region Support 00:08:03.718 ================================ 00:08:03.718 Supported: No 00:08:03.718 00:08:03.718 Admin Command Set Attributes 00:08:03.718 ============================ 00:08:03.718 Security Send/Receive: Not Supported 00:08:03.718 Format NVM: Supported 00:08:03.718 Firmware Activate/Download: Not Supported 00:08:03.718 Namespace Management: Supported 00:08:03.718 Device Self-Test: Not Supported 00:08:03.718 Directives: Supported 00:08:03.718 NVMe-MI: Not Supported 00:08:03.718 Virtualization Management: Not Supported 00:08:03.718 Doorbell Buffer Config: Supported 00:08:03.718 Get LBA Status Capability: Not Supported 00:08:03.718 Command & Feature Lockdown Capability: Not Supported 00:08:03.718 Abort Command Limit: 4 00:08:03.718 Async Event Request Limit: 4 00:08:03.718 Number of Firmware Slots: N/A 00:08:03.718 Firmware Slot 1 Read-Only: N/A 00:08:03.718 Firmware Activation Without Reset: N/A 00:08:03.718 Multiple Update Detection Support: N/A 00:08:03.718 Firmware Update Granularity: No Information Provided 00:08:03.718 Per-Namespace SMART Log: Yes 00:08:03.718 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.718 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:03.718 Command Effects Log Page: Supported 00:08:03.718 Get Log Page Extended Data: Supported 00:08:03.718 Telemetry Log Pages: Not Supported 00:08:03.718 Persistent Event Log Pages: Not Supported 00:08:03.718 Supported Log Pages Log Page: May Support 00:08:03.718 Commands Supported & Effects Log Page: Not Supported 00:08:03.718 Feature Identifiers & Effects Log Page:May Support 00:08:03.718 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.718 Data Area 4 for Telemetry Log: Not Supported 00:08:03.718 Error Log Page Entries Supported: 1 00:08:03.718 Keep Alive: Not Supported 00:08:03.718 00:08:03.718 NVM Command Set Attributes 00:08:03.718 ========================== 00:08:03.718 Submission Queue Entry Size 00:08:03.718 Max: 64 00:08:03.718 Min: 64 00:08:03.718 Completion Queue Entry Size 00:08:03.718 Max: 16 
00:08:03.718 Min: 16 00:08:03.718 Number of Namespaces: 256 00:08:03.718 Compare Command: Supported 00:08:03.718 Write Uncorrectable Command: Not Supported 00:08:03.718 Dataset Management Command: Supported 00:08:03.718 Write Zeroes Command: Supported 00:08:03.718 Set Features Save Field: Supported 00:08:03.718 Reservations: Not Supported 00:08:03.718 Timestamp: Supported 00:08:03.718 Copy: Supported 00:08:03.718 Volatile Write Cache: Present 00:08:03.718 Atomic Write Unit (Normal): 1 00:08:03.718 Atomic Write Unit (PFail): 1 00:08:03.718 Atomic Compare & Write Unit: 1 00:08:03.718 Fused Compare & Write: Not Supported 00:08:03.718 Scatter-Gather List 00:08:03.718 SGL Command Set: Supported 00:08:03.718 SGL Keyed: Not Supported 00:08:03.718 SGL Bit Bucket Descriptor: Not Supported 00:08:03.718 SGL Metadata Pointer: Not Supported 00:08:03.718 Oversized SGL: Not Supported 00:08:03.718 SGL Metadata Address: Not Supported 00:08:03.718 SGL Offset: Not Supported 00:08:03.718 Transport SGL Data Block: Not Supported 00:08:03.718 Replay Protected Memory Block: Not Supported 00:08:03.718 00:08:03.718 Firmware Slot Information 00:08:03.718 ========================= 00:08:03.718 Active slot: 1 00:08:03.718 Slot 1 Firmware Revision: 1.0 00:08:03.718 00:08:03.718 00:08:03.718 Commands Supported and Effects 00:08:03.718 ============================== 00:08:03.719 Admin Commands 00:08:03.719 -------------- 00:08:03.719 Delete I/O Submission Queue (00h): Supported 00:08:03.719 Create I/O Submission Queue (01h): Supported 00:08:03.719 Get Log Page (02h): Supported 00:08:03.719 Delete I/O Completion Queue (04h): Supported 00:08:03.719 Create I/O Completion Queue (05h): Supported 00:08:03.719 Identify (06h): Supported 00:08:03.719 Abort (08h): Supported 00:08:03.719 Set Features (09h): Supported 00:08:03.719 Get Features (0Ah): Supported 00:08:03.719 Asynchronous Event Request (0Ch): Supported 00:08:03.719 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.719 Directive Send (19h): Supported 00:08:03.719 Directive Receive (1Ah): Supported 00:08:03.719 Virtualization Management (1Ch): Supported 00:08:03.719 Doorbell Buffer Config (7Ch): Supported 00:08:03.719 Format NVM (80h): Supported LBA-Change 00:08:03.719 I/O Commands 00:08:03.719 ------------ 00:08:03.719 Flush (00h): Supported LBA-Change 00:08:03.719 Write (01h): Supported LBA-Change 00:08:03.719 Read (02h): Supported 00:08:03.719 Compare (05h): Supported 00:08:03.719 Write Zeroes (08h): Supported LBA-Change 00:08:03.719 Dataset Management (09h): Supported LBA-Change 00:08:03.719 Unknown (0Ch): Supported 00:08:03.719 Unknown (12h): Supported 00:08:03.719 Copy (19h): Supported LBA-Change 00:08:03.719 Unknown (1Dh): Supported LBA-Change 00:08:03.719 00:08:03.719 Error Log 00:08:03.719 ========= 00:08:03.719 00:08:03.719 Arbitration 00:08:03.719 =========== 00:08:03.719 Arbitration Burst: no limit 00:08:03.719 00:08:03.719 Power Management 00:08:03.719 ================ 00:08:03.719 Number of Power States: 1 00:08:03.719 Current Power State: Power State #0 00:08:03.719 Power State #0: 00:08:03.719 Max Power: 25.00 W 00:08:03.719 Non-Operational State: Operational 00:08:03.719 Entry Latency: 16 microseconds 00:08:03.719 Exit Latency: 4 microseconds 00:08:03.719 Relative Read Throughput: 0 00:08:03.719 Relative Read Latency: 0 00:08:03.719 Relative Write Throughput: 0 00:08:03.719 Relative Write Latency: 0 00:08:03.719 Idle Power: Not Reported 00:08:03.719 Active Power: Not Reported 00:08:03.719 Non-Operational Permissive Mode: Not Supported 
00:08:03.719 00:08:03.719 Health Information 00:08:03.719 ================== 00:08:03.719 Critical Warnings: 00:08:03.719 Available Spare Space: OK 00:08:03.719 Temperature: OK 00:08:03.719 Device Reliability: OK 00:08:03.719 Read Only: No 00:08:03.719 Volatile Memory Backup: OK 00:08:03.719 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.719 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.719 Available Spare: 0% 00:08:03.719 Available Spare Threshold: 0% 00:08:03.719 Life Percentage Used: 0% 00:08:03.719 Data Units Read: 2383 00:08:03.719 Data Units Written: 2170 00:08:03.719 Host Read Commands: 117674 00:08:03.719 Host Write Commands: 115944 00:08:03.719 Controller Busy Time: 0 minutes 00:08:03.719 Power Cycles: 0 00:08:03.719 Power On Hours: 0 hours 00:08:03.719 Unsafe Shutdowns: 0 00:08:03.719 Unrecoverable Media Errors: 0 00:08:03.719 Lifetime Error Log Entries: 0 00:08:03.719 Warning Temperature Time: 0 minutes 00:08:03.719 Critical Temperature Time: 0 minutes 00:08:03.719 00:08:03.719 Number of Queues 00:08:03.719 ================ 00:08:03.719 Number of I/O Submission Queues: 64 00:08:03.719 Number of I/O Completion Queues: 64 00:08:03.719 00:08:03.719 ZNS Specific Controller Data 00:08:03.719 ============================ 00:08:03.719 Zone Append Size Limit: 0 00:08:03.719 00:08:03.719 00:08:03.719 Active Namespaces 00:08:03.719 ================= 00:08:03.719 Namespace ID:1 00:08:03.719 Error Recovery Timeout: Unlimited 00:08:03.719 Command Set Identifier: NVM (00h) 00:08:03.719 Deallocate: Supported 00:08:03.719 Deallocated/Unwritten Error: Supported 00:08:03.719 Deallocated Read Value: All 0x00 00:08:03.719 Deallocate in Write Zeroes: Not Supported 00:08:03.719 Deallocated Guard Field: 0xFFFF 00:08:03.719 Flush: Supported 00:08:03.719 Reservation: Not Supported 00:08:03.719 Namespace Sharing Capabilities: Private 00:08:03.719 Size (in LBAs): 1048576 (4GiB) 00:08:03.719 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.719 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.719 Thin Provisioning: Not Supported 00:08:03.719 Per-NS Atomic Units: No 00:08:03.719 Maximum Single Source Range Length: 128 00:08:03.719 Maximum Copy Length: 128 00:08:03.719 Maximum Source Range Count: 128 00:08:03.719 NGUID/EUI64 Never Reused: No 00:08:03.719 Namespace Write Protected: No 00:08:03.719 Number of LBA Formats: 8 00:08:03.719 Current LBA Format: LBA Format #04 00:08:03.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.719 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.719 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.719 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.719 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.719 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.719 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.719 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.719 00:08:03.719 NVM Specific Namespace Data 00:08:03.719 =========================== 00:08:03.719 Logical Block Storage Tag Mask: 0 00:08:03.719 Protection Information Capabilities: 00:08:03.719 16b Guard Protection Information Storage Tag Support: No 00:08:03.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.719 Storage Tag Check Read Support: No 00:08:03.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.719 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.719 Namespace ID:2 00:08:03.719 Error Recovery Timeout: Unlimited 00:08:03.719 Command Set Identifier: NVM (00h) 00:08:03.719 Deallocate: Supported 00:08:03.719 Deallocated/Unwritten Error: Supported 00:08:03.719 Deallocated Read Value: All 0x00 00:08:03.719 Deallocate in Write Zeroes: Not Supported 00:08:03.719 Deallocated Guard Field: 0xFFFF 00:08:03.720 Flush: Supported 00:08:03.720 Reservation: Not Supported 00:08:03.720 Namespace Sharing Capabilities: Private 00:08:03.720 Size (in LBAs): 1048576 (4GiB) 00:08:03.720 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.720 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.720 Thin Provisioning: Not Supported 00:08:03.720 Per-NS Atomic Units: No 00:08:03.720 Maximum Single Source Range Length: 128 00:08:03.720 Maximum Copy Length: 128 00:08:03.720 Maximum Source Range Count: 128 00:08:03.720 NGUID/EUI64 Never Reused: No 00:08:03.720 Namespace Write Protected: No 00:08:03.720 Number of LBA Formats: 8 00:08:03.720 Current LBA Format: LBA Format #04 00:08:03.720 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.720 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.720 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.720 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.720 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.720 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.720 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.720 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.720 00:08:03.720 NVM Specific Namespace Data 00:08:03.720 =========================== 00:08:03.720 Logical Block Storage Tag Mask: 0 00:08:03.720 Protection Information Capabilities: 00:08:03.720 16b Guard Protection Information Storage Tag Support: No 00:08:03.720 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.720 Storage Tag Check Read Support: No 00:08:03.720 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Namespace ID:3 00:08:03.720 Error Recovery Timeout: Unlimited 00:08:03.720 Command Set Identifier: NVM (00h) 00:08:03.720 Deallocate: Supported 00:08:03.720 Deallocated/Unwritten Error: Supported 00:08:03.720 Deallocated Read 
Value: All 0x00 00:08:03.720 Deallocate in Write Zeroes: Not Supported 00:08:03.720 Deallocated Guard Field: 0xFFFF 00:08:03.720 Flush: Supported 00:08:03.720 Reservation: Not Supported 00:08:03.720 Namespace Sharing Capabilities: Private 00:08:03.720 Size (in LBAs): 1048576 (4GiB) 00:08:03.720 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.720 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.720 Thin Provisioning: Not Supported 00:08:03.720 Per-NS Atomic Units: No 00:08:03.720 Maximum Single Source Range Length: 128 00:08:03.720 Maximum Copy Length: 128 00:08:03.720 Maximum Source Range Count: 128 00:08:03.720 NGUID/EUI64 Never Reused: No 00:08:03.720 Namespace Write Protected: No 00:08:03.720 Number of LBA Formats: 8 00:08:03.720 Current LBA Format: LBA Format #04 00:08:03.720 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.720 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.720 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.720 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.720 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.720 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.720 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.720 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.720 00:08:03.720 NVM Specific Namespace Data 00:08:03.720 =========================== 00:08:03.720 Logical Block Storage Tag Mask: 0 00:08:03.720 Protection Information Capabilities: 00:08:03.720 16b Guard Protection Information Storage Tag Support: No 00:08:03.720 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.720 Storage Tag Check Read Support: No 00:08:03.720 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.720 23:41:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:03.720 23:41:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:03.720 ===================================================== 00:08:03.720 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.720 ===================================================== 00:08:03.720 Controller Capabilities/Features 00:08:03.720 ================================ 00:08:03.720 Vendor ID: 1b36 00:08:03.720 Subsystem Vendor ID: 1af4 00:08:03.720 Serial Number: 12343 00:08:03.720 Model Number: QEMU NVMe Ctrl 00:08:03.720 Firmware Version: 8.0.0 00:08:03.720 Recommended Arb Burst: 6 00:08:03.720 IEEE OUI Identifier: 00 54 52 00:08:03.720 Multi-path I/O 00:08:03.720 May have multiple subsystem ports: No 00:08:03.720 May have multiple controllers: Yes 00:08:03.720 Associated with SR-IOV VF: No 00:08:03.720 Max Data Transfer Size: 524288 00:08:03.720 Max Number of Namespaces: 
256 00:08:03.720 Max Number of I/O Queues: 64 00:08:03.720 NVMe Specification Version (VS): 1.4 00:08:03.720 NVMe Specification Version (Identify): 1.4 00:08:03.720 Maximum Queue Entries: 2048 00:08:03.720 Contiguous Queues Required: Yes 00:08:03.720 Arbitration Mechanisms Supported 00:08:03.720 Weighted Round Robin: Not Supported 00:08:03.720 Vendor Specific: Not Supported 00:08:03.721 Reset Timeout: 7500 ms 00:08:03.721 Doorbell Stride: 4 bytes 00:08:03.721 NVM Subsystem Reset: Not Supported 00:08:03.721 Command Sets Supported 00:08:03.721 NVM Command Set: Supported 00:08:03.721 Boot Partition: Not Supported 00:08:03.721 Memory Page Size Minimum: 4096 bytes 00:08:03.721 Memory Page Size Maximum: 65536 bytes 00:08:03.721 Persistent Memory Region: Not Supported 00:08:03.721 Optional Asynchronous Events Supported 00:08:03.721 Namespace Attribute Notices: Supported 00:08:03.721 Firmware Activation Notices: Not Supported 00:08:03.721 ANA Change Notices: Not Supported 00:08:03.721 PLE Aggregate Log Change Notices: Not Supported 00:08:03.721 LBA Status Info Alert Notices: Not Supported 00:08:03.721 EGE Aggregate Log Change Notices: Not Supported 00:08:03.721 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.721 Zone Descriptor Change Notices: Not Supported 00:08:03.721 Discovery Log Change Notices: Not Supported 00:08:03.721 Controller Attributes 00:08:03.721 128-bit Host Identifier: Not Supported 00:08:03.721 Non-Operational Permissive Mode: Not Supported 00:08:03.721 NVM Sets: Not Supported 00:08:03.721 Read Recovery Levels: Not Supported 00:08:03.721 Endurance Groups: Supported 00:08:03.721 Predictable Latency Mode: Not Supported 00:08:03.721 Traffic Based Keep ALive: Not Supported 00:08:03.721 Namespace Granularity: Not Supported 00:08:03.721 SQ Associations: Not Supported 00:08:03.721 UUID List: Not Supported 00:08:03.721 Multi-Domain Subsystem: Not Supported 00:08:03.721 Fixed Capacity Management: Not Supported 00:08:03.721 Variable Capacity Management: Not Supported 00:08:03.721 Delete Endurance Group: Not Supported 00:08:03.721 Delete NVM Set: Not Supported 00:08:03.721 Extended LBA Formats Supported: Supported 00:08:03.721 Flexible Data Placement Supported: Supported 00:08:03.721 00:08:03.721 Controller Memory Buffer Support 00:08:03.721 ================================ 00:08:03.721 Supported: No 00:08:03.721 00:08:03.721 Persistent Memory Region Support 00:08:03.721 ================================ 00:08:03.721 Supported: No 00:08:03.721 00:08:03.721 Admin Command Set Attributes 00:08:03.721 ============================ 00:08:03.721 Security Send/Receive: Not Supported 00:08:03.721 Format NVM: Supported 00:08:03.721 Firmware Activate/Download: Not Supported 00:08:03.721 Namespace Management: Supported 00:08:03.721 Device Self-Test: Not Supported 00:08:03.721 Directives: Supported 00:08:03.721 NVMe-MI: Not Supported 00:08:03.721 Virtualization Management: Not Supported 00:08:03.721 Doorbell Buffer Config: Supported 00:08:03.721 Get LBA Status Capability: Not Supported 00:08:03.721 Command & Feature Lockdown Capability: Not Supported 00:08:03.721 Abort Command Limit: 4 00:08:03.721 Async Event Request Limit: 4 00:08:03.721 Number of Firmware Slots: N/A 00:08:03.721 Firmware Slot 1 Read-Only: N/A 00:08:03.721 Firmware Activation Without Reset: N/A 00:08:03.721 Multiple Update Detection Support: N/A 00:08:03.721 Firmware Update Granularity: No Information Provided 00:08:03.721 Per-Namespace SMART Log: Yes 00:08:03.721 Asymmetric Namespace Access Log Page: Not Supported 
00:08:03.721 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:03.721 Command Effects Log Page: Supported 00:08:03.721 Get Log Page Extended Data: Supported 00:08:03.721 Telemetry Log Pages: Not Supported 00:08:03.721 Persistent Event Log Pages: Not Supported 00:08:03.721 Supported Log Pages Log Page: May Support 00:08:03.721 Commands Supported & Effects Log Page: Not Supported 00:08:03.721 Feature Identifiers & Effects Log Page:May Support 00:08:03.721 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.721 Data Area 4 for Telemetry Log: Not Supported 00:08:03.721 Error Log Page Entries Supported: 1 00:08:03.721 Keep Alive: Not Supported 00:08:03.721 00:08:03.721 NVM Command Set Attributes 00:08:03.721 ========================== 00:08:03.721 Submission Queue Entry Size 00:08:03.721 Max: 64 00:08:03.721 Min: 64 00:08:03.721 Completion Queue Entry Size 00:08:03.721 Max: 16 00:08:03.721 Min: 16 00:08:03.721 Number of Namespaces: 256 00:08:03.721 Compare Command: Supported 00:08:03.721 Write Uncorrectable Command: Not Supported 00:08:03.721 Dataset Management Command: Supported 00:08:03.721 Write Zeroes Command: Supported 00:08:03.721 Set Features Save Field: Supported 00:08:03.721 Reservations: Not Supported 00:08:03.721 Timestamp: Supported 00:08:03.721 Copy: Supported 00:08:03.721 Volatile Write Cache: Present 00:08:03.721 Atomic Write Unit (Normal): 1 00:08:03.721 Atomic Write Unit (PFail): 1 00:08:03.721 Atomic Compare & Write Unit: 1 00:08:03.721 Fused Compare & Write: Not Supported 00:08:03.721 Scatter-Gather List 00:08:03.721 SGL Command Set: Supported 00:08:03.721 SGL Keyed: Not Supported 00:08:03.721 SGL Bit Bucket Descriptor: Not Supported 00:08:03.721 SGL Metadata Pointer: Not Supported 00:08:03.721 Oversized SGL: Not Supported 00:08:03.721 SGL Metadata Address: Not Supported 00:08:03.721 SGL Offset: Not Supported 00:08:03.721 Transport SGL Data Block: Not Supported 00:08:03.721 Replay Protected Memory Block: Not Supported 00:08:03.721 00:08:03.721 Firmware Slot Information 00:08:03.721 ========================= 00:08:03.721 Active slot: 1 00:08:03.721 Slot 1 Firmware Revision: 1.0 00:08:03.721 00:08:03.721 00:08:03.721 Commands Supported and Effects 00:08:03.721 ============================== 00:08:03.721 Admin Commands 00:08:03.721 -------------- 00:08:03.721 Delete I/O Submission Queue (00h): Supported 00:08:03.721 Create I/O Submission Queue (01h): Supported 00:08:03.721 Get Log Page (02h): Supported 00:08:03.721 Delete I/O Completion Queue (04h): Supported 00:08:03.721 Create I/O Completion Queue (05h): Supported 00:08:03.721 Identify (06h): Supported 00:08:03.721 Abort (08h): Supported 00:08:03.721 Set Features (09h): Supported 00:08:03.721 Get Features (0Ah): Supported 00:08:03.721 Asynchronous Event Request (0Ch): Supported 00:08:03.721 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.721 Directive Send (19h): Supported 00:08:03.721 Directive Receive (1Ah): Supported 00:08:03.721 Virtualization Management (1Ch): Supported 00:08:03.721 Doorbell Buffer Config (7Ch): Supported 00:08:03.721 Format NVM (80h): Supported LBA-Change 00:08:03.721 I/O Commands 00:08:03.721 ------------ 00:08:03.721 Flush (00h): Supported LBA-Change 00:08:03.721 Write (01h): Supported LBA-Change 00:08:03.721 Read (02h): Supported 00:08:03.721 Compare (05h): Supported 00:08:03.721 Write Zeroes (08h): Supported LBA-Change 00:08:03.721 Dataset Management (09h): Supported LBA-Change 00:08:03.721 Unknown (0Ch): Supported 00:08:03.721 Unknown (12h): Supported 00:08:03.722 Copy 
(19h): Supported LBA-Change 00:08:03.722 Unknown (1Dh): Supported LBA-Change 00:08:03.722 00:08:03.722 Error Log 00:08:03.722 ========= 00:08:03.722 00:08:03.722 Arbitration 00:08:03.722 =========== 00:08:03.722 Arbitration Burst: no limit 00:08:03.722 00:08:03.722 Power Management 00:08:03.722 ================ 00:08:03.722 Number of Power States: 1 00:08:03.722 Current Power State: Power State #0 00:08:03.722 Power State #0: 00:08:03.722 Max Power: 25.00 W 00:08:03.722 Non-Operational State: Operational 00:08:03.722 Entry Latency: 16 microseconds 00:08:03.722 Exit Latency: 4 microseconds 00:08:03.722 Relative Read Throughput: 0 00:08:03.722 Relative Read Latency: 0 00:08:03.722 Relative Write Throughput: 0 00:08:03.722 Relative Write Latency: 0 00:08:03.722 Idle Power: Not Reported 00:08:03.722 Active Power: Not Reported 00:08:03.722 Non-Operational Permissive Mode: Not Supported 00:08:03.722 00:08:03.722 Health Information 00:08:03.722 ================== 00:08:03.722 Critical Warnings: 00:08:03.722 Available Spare Space: OK 00:08:03.722 Temperature: OK 00:08:03.722 Device Reliability: OK 00:08:03.722 Read Only: No 00:08:03.722 Volatile Memory Backup: OK 00:08:03.722 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.722 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.722 Available Spare: 0% 00:08:03.722 Available Spare Threshold: 0% 00:08:03.722 Life Percentage Used: 0% 00:08:03.722 Data Units Read: 963 00:08:03.722 Data Units Written: 892 00:08:03.722 Host Read Commands: 40598 00:08:03.722 Host Write Commands: 40022 00:08:03.722 Controller Busy Time: 0 minutes 00:08:03.722 Power Cycles: 0 00:08:03.722 Power On Hours: 0 hours 00:08:03.722 Unsafe Shutdowns: 0 00:08:03.722 Unrecoverable Media Errors: 0 00:08:03.722 Lifetime Error Log Entries: 0 00:08:03.722 Warning Temperature Time: 0 minutes 00:08:03.722 Critical Temperature Time: 0 minutes 00:08:03.722 00:08:03.722 Number of Queues 00:08:03.722 ================ 00:08:03.722 Number of I/O Submission Queues: 64 00:08:03.722 Number of I/O Completion Queues: 64 00:08:03.722 00:08:03.722 ZNS Specific Controller Data 00:08:03.722 ============================ 00:08:03.722 Zone Append Size Limit: 0 00:08:03.722 00:08:03.722 00:08:03.722 Active Namespaces 00:08:03.722 ================= 00:08:03.722 Namespace ID:1 00:08:03.722 Error Recovery Timeout: Unlimited 00:08:03.722 Command Set Identifier: NVM (00h) 00:08:03.722 Deallocate: Supported 00:08:03.722 Deallocated/Unwritten Error: Supported 00:08:03.722 Deallocated Read Value: All 0x00 00:08:03.722 Deallocate in Write Zeroes: Not Supported 00:08:03.722 Deallocated Guard Field: 0xFFFF 00:08:03.722 Flush: Supported 00:08:03.722 Reservation: Not Supported 00:08:03.722 Namespace Sharing Capabilities: Multiple Controllers 00:08:03.722 Size (in LBAs): 262144 (1GiB) 00:08:03.722 Capacity (in LBAs): 262144 (1GiB) 00:08:03.722 Utilization (in LBAs): 262144 (1GiB) 00:08:03.722 Thin Provisioning: Not Supported 00:08:03.722 Per-NS Atomic Units: No 00:08:03.722 Maximum Single Source Range Length: 128 00:08:03.722 Maximum Copy Length: 128 00:08:03.722 Maximum Source Range Count: 128 00:08:03.722 NGUID/EUI64 Never Reused: No 00:08:03.722 Namespace Write Protected: No 00:08:03.722 Endurance group ID: 1 00:08:03.722 Number of LBA Formats: 8 00:08:03.722 Current LBA Format: LBA Format #04 00:08:03.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.722 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:03.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.722 00:08:03.722 Get Feature FDP: 00:08:03.722 ================ 00:08:03.722 Enabled: Yes 00:08:03.722 FDP configuration index: 0 00:08:03.722 00:08:03.722 FDP configurations log page 00:08:03.722 =========================== 00:08:03.722 Number of FDP configurations: 1 00:08:03.722 Version: 0 00:08:03.722 Size: 112 00:08:03.722 FDP Configuration Descriptor: 0 00:08:03.722 Descriptor Size: 96 00:08:03.722 Reclaim Group Identifier format: 2 00:08:03.722 FDP Volatile Write Cache: Not Present 00:08:03.722 FDP Configuration: Valid 00:08:03.722 Vendor Specific Size: 0 00:08:03.722 Number of Reclaim Groups: 2 00:08:03.722 Number of Recalim Unit Handles: 8 00:08:03.722 Max Placement Identifiers: 128 00:08:03.722 Number of Namespaces Suppprted: 256 00:08:03.722 Reclaim unit Nominal Size: 6000000 bytes 00:08:03.722 Estimated Reclaim Unit Time Limit: Not Reported 00:08:03.722 RUH Desc #000: RUH Type: Initially Isolated 00:08:03.722 RUH Desc #001: RUH Type: Initially Isolated 00:08:03.722 RUH Desc #002: RUH Type: Initially Isolated 00:08:03.722 RUH Desc #003: RUH Type: Initially Isolated 00:08:03.722 RUH Desc #004: RUH Type: Initially Isolated 00:08:03.722 RUH Desc #005: RUH Type: Initially Isolated 00:08:03.722 RUH Desc #006: RUH Type: Initially Isolated 00:08:03.722 RUH Desc #007: RUH Type: Initially Isolated 00:08:03.722 00:08:03.722 FDP reclaim unit handle usage log page 00:08:03.722 ====================================== 00:08:03.722 Number of Reclaim Unit Handles: 8 00:08:03.722 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:03.722 RUH Usage Desc #001: RUH Attributes: Unused 00:08:03.722 RUH Usage Desc #002: RUH Attributes: Unused 00:08:03.722 RUH Usage Desc #003: RUH Attributes: Unused 00:08:03.722 RUH Usage Desc #004: RUH Attributes: Unused 00:08:03.722 RUH Usage Desc #005: RUH Attributes: Unused 00:08:03.722 RUH Usage Desc #006: RUH Attributes: Unused 00:08:03.722 RUH Usage Desc #007: RUH Attributes: Unused 00:08:03.722 00:08:03.722 FDP statistics log page 00:08:03.722 ======================= 00:08:03.722 Host bytes with metadata written: 560111616 00:08:03.722 Media bytes with metadata written: 560189440 00:08:03.722 Media bytes erased: 0 00:08:03.722 00:08:03.722 FDP events log page 00:08:03.722 =================== 00:08:03.722 Number of FDP events: 0 00:08:03.722 00:08:03.722 NVM Specific Namespace Data 00:08:03.722 =========================== 00:08:03.722 Logical Block Storage Tag Mask: 0 00:08:03.722 Protection Information Capabilities: 00:08:03.722 16b Guard Protection Information Storage Tag Support: No 00:08:03.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.722 Storage Tag Check Read Support: No 00:08:03.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.722 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.722 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.723 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.723 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.723 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.723 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.981 ************************************ 00:08:03.981 END TEST nvme_identify 00:08:03.981 ************************************ 00:08:03.981 00:08:03.981 real 0m1.061s 00:08:03.981 user 0m0.407s 00:08:03.981 sys 0m0.465s 00:08:03.981 23:41:51 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.981 23:41:51 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:03.981 23:41:51 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:03.981 23:41:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.981 23:41:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.981 23:41:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.981 ************************************ 00:08:03.981 START TEST nvme_perf 00:08:03.981 ************************************ 00:08:03.981 23:41:51 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:03.981 23:41:51 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:04.935 Initializing NVMe Controllers 00:08:04.935 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:04.935 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:04.935 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:04.935 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:04.935 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:04.935 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:04.935 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:04.935 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:04.935 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:04.935 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:04.935 Initialization complete. Launching workers. 
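The perf run above issues 12288-byte reads (-o 12288) at queue depth 128 (-q 128) for one second (-t 1) with latency logging enabled (-LL), so each per-controller MiB/s figure in the summary that follows should simply be the reported IOPS multiplied by the I/O size. A minimal sanity-check sketch in shell: awk is used here only for the arithmetic (it is not part of the test scripts), and the 13397.28 IOPS value is quoted from the PCIE 0000:00:13.0 row of the table below.

# Cross-check: throughput (MiB/s) = IOPS * I/O size in bytes / 2^20
# 13397.28 IOPS and the 12288-byte I/O size are taken from this perf run.
awk 'BEGIN { iops = 13397.28; io = 12288; printf("%.2f MiB/s\n", iops * io / (1024 * 1024)) }'
# prints 157.00 MiB/s, matching the MiB/s column for that controller below

Note that the run attaches to all four QEMU controllers (0000:00:10.0 through 0000:00:13.0) and six namespaces in total, which is why the summary lists six device rows plus a combined total.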
00:08:04.935 ======================================================== 00:08:04.935 Latency(us) 00:08:04.935 Device Information : IOPS MiB/s Average min max 00:08:04.935 PCIE (0000:00:13.0) NSID 1 from core 0: 13397.28 157.00 9558.83 6883.27 27181.32 00:08:04.935 PCIE (0000:00:11.0) NSID 1 from core 0: 13397.28 157.00 9552.17 6425.62 26466.35 00:08:04.935 PCIE (0000:00:10.0) NSID 1 from core 0: 13397.28 157.00 9543.07 5826.72 25980.00 00:08:04.935 PCIE (0000:00:12.0) NSID 1 from core 0: 13397.28 157.00 9535.73 5444.95 25329.31 00:08:04.935 PCIE (0000:00:12.0) NSID 2 from core 0: 13397.28 157.00 9527.58 4314.45 25272.92 00:08:04.935 PCIE (0000:00:12.0) NSID 3 from core 0: 13397.28 157.00 9519.54 3891.26 24873.37 00:08:04.935 ======================================================== 00:08:04.935 Total : 80383.66 942.00 9539.49 3891.26 27181.32 00:08:04.935 00:08:04.935 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:04.935 ================================================================================= 00:08:04.935 1.00000% : 8166.794us 00:08:04.935 10.00000% : 8620.505us 00:08:04.935 25.00000% : 8872.566us 00:08:04.935 50.00000% : 9225.452us 00:08:04.935 75.00000% : 9628.751us 00:08:04.935 90.00000% : 10838.646us 00:08:04.935 95.00000% : 11947.717us 00:08:04.935 98.00000% : 14821.218us 00:08:04.935 99.00000% : 16938.535us 00:08:04.935 99.50000% : 18249.255us 00:08:04.935 99.90000% : 27020.997us 00:08:04.935 99.99000% : 27222.646us 00:08:04.935 99.99900% : 27222.646us 00:08:04.935 99.99990% : 27222.646us 00:08:04.935 99.99999% : 27222.646us 00:08:04.935 00:08:04.935 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:04.935 ================================================================================= 00:08:04.935 1.00000% : 8166.794us 00:08:04.935 10.00000% : 8620.505us 00:08:04.935 25.00000% : 8872.566us 00:08:04.935 50.00000% : 9225.452us 00:08:04.935 75.00000% : 9628.751us 00:08:04.935 90.00000% : 10838.646us 00:08:04.935 95.00000% : 11846.892us 00:08:04.935 98.00000% : 15022.868us 00:08:04.935 99.00000% : 16636.062us 00:08:04.935 99.50000% : 18854.203us 00:08:04.935 99.90000% : 26416.049us 00:08:04.935 99.99000% : 26617.698us 00:08:04.935 99.99900% : 26617.698us 00:08:04.935 99.99990% : 26617.698us 00:08:04.935 99.99999% : 26617.698us 00:08:04.935 00:08:04.935 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:04.935 ================================================================================= 00:08:04.935 1.00000% : 8065.969us 00:08:04.935 10.00000% : 8570.092us 00:08:04.935 25.00000% : 8872.566us 00:08:04.935 50.00000% : 9225.452us 00:08:04.935 75.00000% : 9679.163us 00:08:04.935 90.00000% : 10687.409us 00:08:04.935 95.00000% : 11796.480us 00:08:04.935 98.00000% : 14619.569us 00:08:04.935 99.00000% : 16938.535us 00:08:04.935 99.50000% : 19459.151us 00:08:04.935 99.90000% : 25811.102us 00:08:04.935 99.99000% : 26012.751us 00:08:04.935 99.99900% : 26012.751us 00:08:04.935 99.99990% : 26012.751us 00:08:04.935 99.99999% : 26012.751us 00:08:04.935 00:08:04.935 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:04.935 ================================================================================= 00:08:04.935 1.00000% : 8166.794us 00:08:04.935 10.00000% : 8620.505us 00:08:04.935 25.00000% : 8872.566us 00:08:04.935 50.00000% : 9225.452us 00:08:04.935 75.00000% : 9628.751us 00:08:04.935 90.00000% : 10636.997us 00:08:04.935 95.00000% : 11746.068us 00:08:04.935 98.00000% : 14014.622us 00:08:04.935 
99.00000% : 16736.886us 00:08:04.935 99.50000% : 19761.625us 00:08:04.935 99.90000% : 25206.154us 00:08:04.935 99.99000% : 25407.803us 00:08:04.936 99.99900% : 25407.803us 00:08:04.936 99.99990% : 25407.803us 00:08:04.936 99.99999% : 25407.803us 00:08:04.936 00:08:04.936 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:04.936 ================================================================================= 00:08:04.936 1.00000% : 8166.794us 00:08:04.936 10.00000% : 8620.505us 00:08:04.936 25.00000% : 8872.566us 00:08:04.936 50.00000% : 9175.040us 00:08:04.936 75.00000% : 9628.751us 00:08:04.936 90.00000% : 10586.585us 00:08:04.936 95.00000% : 11897.305us 00:08:05.196 98.00000% : 14417.920us 00:08:05.196 99.00000% : 17845.957us 00:08:05.196 99.50000% : 20064.098us 00:08:05.196 99.90000% : 25105.329us 00:08:05.196 99.99000% : 25306.978us 00:08:05.196 99.99900% : 25306.978us 00:08:05.196 99.99990% : 25306.978us 00:08:05.196 99.99999% : 25306.978us 00:08:05.196 00:08:05.196 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:05.196 ================================================================================= 00:08:05.196 1.00000% : 7965.145us 00:08:05.196 10.00000% : 8620.505us 00:08:05.196 25.00000% : 8872.566us 00:08:05.196 50.00000% : 9225.452us 00:08:05.196 75.00000% : 9578.338us 00:08:05.196 90.00000% : 10636.997us 00:08:05.196 95.00000% : 11796.480us 00:08:05.196 98.00000% : 14619.569us 00:08:05.196 99.00000% : 17946.782us 00:08:05.196 99.50000% : 20265.748us 00:08:05.196 99.90000% : 24702.031us 00:08:05.196 99.99000% : 24903.680us 00:08:05.196 99.99900% : 24903.680us 00:08:05.196 99.99990% : 24903.680us 00:08:05.196 99.99999% : 24903.680us 00:08:05.196 00:08:05.196 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:05.196 ============================================================================== 00:08:05.196 Range in us Cumulative IO count 00:08:05.196 6856.074 - 6906.486: 0.0223% ( 3) 00:08:05.196 6906.486 - 6956.898: 0.0446% ( 3) 00:08:05.196 6956.898 - 7007.311: 0.0818% ( 5) 00:08:05.196 7007.311 - 7057.723: 0.1116% ( 4) 00:08:05.196 7057.723 - 7108.135: 0.1488% ( 5) 00:08:05.196 7108.135 - 7158.548: 0.1711% ( 3) 00:08:05.196 7158.548 - 7208.960: 0.2009% ( 4) 00:08:05.196 7208.960 - 7259.372: 0.2307% ( 4) 00:08:05.196 7259.372 - 7309.785: 0.2604% ( 4) 00:08:05.196 7309.785 - 7360.197: 0.2902% ( 4) 00:08:05.196 7360.197 - 7410.609: 0.3274% ( 5) 00:08:05.196 7410.609 - 7461.022: 0.3571% ( 4) 00:08:05.196 7461.022 - 7511.434: 0.3869% ( 4) 00:08:05.196 7511.434 - 7561.846: 0.4167% ( 4) 00:08:05.196 7561.846 - 7612.258: 0.4464% ( 4) 00:08:05.196 7612.258 - 7662.671: 0.4762% ( 4) 00:08:05.196 7713.083 - 7763.495: 0.4985% ( 3) 00:08:05.196 7763.495 - 7813.908: 0.5655% ( 9) 00:08:05.196 7813.908 - 7864.320: 0.6027% ( 5) 00:08:05.196 7864.320 - 7914.732: 0.6622% ( 8) 00:08:05.196 7914.732 - 7965.145: 0.7143% ( 7) 00:08:05.196 7965.145 - 8015.557: 0.7738% ( 8) 00:08:05.196 8015.557 - 8065.969: 0.8482% ( 10) 00:08:05.196 8065.969 - 8116.382: 0.9449% ( 13) 00:08:05.196 8116.382 - 8166.794: 1.0863% ( 19) 00:08:05.196 8166.794 - 8217.206: 1.2723% ( 25) 00:08:05.196 8217.206 - 8267.618: 1.6592% ( 52) 00:08:05.196 8267.618 - 8318.031: 2.1354% ( 64) 00:08:05.196 8318.031 - 8368.443: 2.7604% ( 84) 00:08:05.196 8368.443 - 8418.855: 3.8690% ( 149) 00:08:05.196 8418.855 - 8469.268: 5.2902% ( 191) 00:08:05.196 8469.268 - 8519.680: 6.9494% ( 223) 00:08:05.196 8519.680 - 8570.092: 8.9732% ( 272) 00:08:05.196 8570.092 - 8620.505: 11.4360% ( 
331) 00:08:05.196 8620.505 - 8670.917: 14.1443% ( 364) 00:08:05.196 8670.917 - 8721.329: 16.9048% ( 371) 00:08:05.196 8721.329 - 8771.742: 20.0595% ( 424) 00:08:05.196 8771.742 - 8822.154: 23.4226% ( 452) 00:08:05.196 8822.154 - 8872.566: 26.9345% ( 472) 00:08:05.196 8872.566 - 8922.978: 30.4539% ( 473) 00:08:05.196 8922.978 - 8973.391: 33.9583% ( 471) 00:08:05.196 8973.391 - 9023.803: 37.6935% ( 502) 00:08:05.196 9023.803 - 9074.215: 41.4062% ( 499) 00:08:05.196 9074.215 - 9124.628: 45.2753% ( 520) 00:08:05.196 9124.628 - 9175.040: 49.3155% ( 543) 00:08:05.196 9175.040 - 9225.452: 53.2217% ( 525) 00:08:05.196 9225.452 - 9275.865: 57.1131% ( 523) 00:08:05.196 9275.865 - 9326.277: 60.7515% ( 489) 00:08:05.196 9326.277 - 9376.689: 64.1815% ( 461) 00:08:05.196 9376.689 - 9427.102: 67.3140% ( 421) 00:08:05.196 9427.102 - 9477.514: 70.0744% ( 371) 00:08:05.196 9477.514 - 9527.926: 72.6265% ( 343) 00:08:05.196 9527.926 - 9578.338: 74.7321% ( 283) 00:08:05.196 9578.338 - 9628.751: 76.6369% ( 256) 00:08:05.196 9628.751 - 9679.163: 78.3631% ( 232) 00:08:05.196 9679.163 - 9729.575: 79.6652% ( 175) 00:08:05.196 9729.575 - 9779.988: 80.8557% ( 160) 00:08:05.196 9779.988 - 9830.400: 82.0610% ( 162) 00:08:05.196 9830.400 - 9880.812: 83.1176% ( 142) 00:08:05.196 9880.812 - 9931.225: 84.0997% ( 132) 00:08:05.196 9931.225 - 9981.637: 84.9405% ( 113) 00:08:05.196 9981.637 - 10032.049: 85.5357% ( 80) 00:08:05.196 10032.049 - 10082.462: 86.1458% ( 82) 00:08:05.196 10082.462 - 10132.874: 86.6146% ( 63) 00:08:05.196 10132.874 - 10183.286: 87.0015% ( 52) 00:08:05.196 10183.286 - 10233.698: 87.2917% ( 39) 00:08:05.196 10233.698 - 10284.111: 87.5446% ( 34) 00:08:05.196 10284.111 - 10334.523: 87.8199% ( 37) 00:08:05.196 10334.523 - 10384.935: 88.1473% ( 44) 00:08:05.196 10384.935 - 10435.348: 88.3854% ( 32) 00:08:05.196 10435.348 - 10485.760: 88.6086% ( 30) 00:08:05.196 10485.760 - 10536.172: 88.8318% ( 30) 00:08:05.196 10536.172 - 10586.585: 89.0253% ( 26) 00:08:05.196 10586.585 - 10636.997: 89.2188% ( 26) 00:08:05.196 10636.997 - 10687.409: 89.4420% ( 30) 00:08:05.196 10687.409 - 10737.822: 89.6875% ( 33) 00:08:05.196 10737.822 - 10788.234: 89.9479% ( 35) 00:08:05.196 10788.234 - 10838.646: 90.1711% ( 30) 00:08:05.196 10838.646 - 10889.058: 90.3869% ( 29) 00:08:05.196 10889.058 - 10939.471: 90.5952% ( 28) 00:08:05.196 10939.471 - 10989.883: 90.8185% ( 30) 00:08:05.196 10989.883 - 11040.295: 91.0342% ( 29) 00:08:05.196 11040.295 - 11090.708: 91.3318% ( 40) 00:08:05.196 11090.708 - 11141.120: 91.5402% ( 28) 00:08:05.196 11141.120 - 11191.532: 91.7411% ( 27) 00:08:05.196 11191.532 - 11241.945: 92.0164% ( 37) 00:08:05.196 11241.945 - 11292.357: 92.2247% ( 28) 00:08:05.196 11292.357 - 11342.769: 92.4256% ( 27) 00:08:05.196 11342.769 - 11393.182: 92.6339% ( 28) 00:08:05.196 11393.182 - 11443.594: 92.8571% ( 30) 00:08:05.196 11443.594 - 11494.006: 93.0952% ( 32) 00:08:05.196 11494.006 - 11544.418: 93.3631% ( 36) 00:08:05.196 11544.418 - 11594.831: 93.6607% ( 40) 00:08:05.196 11594.831 - 11645.243: 93.9062% ( 33) 00:08:05.197 11645.243 - 11695.655: 94.1443% ( 32) 00:08:05.197 11695.655 - 11746.068: 94.3601% ( 29) 00:08:05.197 11746.068 - 11796.480: 94.5461% ( 25) 00:08:05.197 11796.480 - 11846.892: 94.7173% ( 23) 00:08:05.197 11846.892 - 11897.305: 94.8958% ( 24) 00:08:05.197 11897.305 - 11947.717: 95.0744% ( 24) 00:08:05.197 11947.717 - 11998.129: 95.2307% ( 21) 00:08:05.197 11998.129 - 12048.542: 95.3943% ( 22) 00:08:05.197 12048.542 - 12098.954: 95.5506% ( 21) 00:08:05.197 12098.954 - 12149.366: 95.7068% ( 21) 
00:08:05.197 12149.366 - 12199.778: 95.8557% ( 20) 00:08:05.197 12199.778 - 12250.191: 95.9821% ( 17) 00:08:05.197 12250.191 - 12300.603: 96.1161% ( 18) 00:08:05.197 12300.603 - 12351.015: 96.2500% ( 18) 00:08:05.197 12351.015 - 12401.428: 96.3690% ( 16) 00:08:05.197 12401.428 - 12451.840: 96.4732% ( 14) 00:08:05.197 12451.840 - 12502.252: 96.5551% ( 11) 00:08:05.197 12502.252 - 12552.665: 96.6071% ( 7) 00:08:05.197 12552.665 - 12603.077: 96.6741% ( 9) 00:08:05.197 12603.077 - 12653.489: 96.7336% ( 8) 00:08:05.197 12653.489 - 12703.902: 96.7857% ( 7) 00:08:05.197 12703.902 - 12754.314: 96.8378% ( 7) 00:08:05.197 12754.314 - 12804.726: 96.9048% ( 9) 00:08:05.197 12804.726 - 12855.138: 96.9568% ( 7) 00:08:05.197 12855.138 - 12905.551: 97.0164% ( 8) 00:08:05.197 12905.551 - 13006.375: 97.1354% ( 16) 00:08:05.197 13006.375 - 13107.200: 97.2396% ( 14) 00:08:05.197 13107.200 - 13208.025: 97.3289% ( 12) 00:08:05.197 13208.025 - 13308.849: 97.3884% ( 8) 00:08:05.197 13308.849 - 13409.674: 97.4405% ( 7) 00:08:05.197 13409.674 - 13510.498: 97.4926% ( 7) 00:08:05.197 13510.498 - 13611.323: 97.5521% ( 8) 00:08:05.197 13611.323 - 13712.148: 97.5893% ( 5) 00:08:05.197 13712.148 - 13812.972: 97.6190% ( 4) 00:08:05.197 14115.446 - 14216.271: 97.6562% ( 5) 00:08:05.197 14216.271 - 14317.095: 97.6935% ( 5) 00:08:05.197 14317.095 - 14417.920: 97.7604% ( 9) 00:08:05.197 14417.920 - 14518.745: 97.8348% ( 10) 00:08:05.197 14518.745 - 14619.569: 97.8943% ( 8) 00:08:05.197 14619.569 - 14720.394: 97.9539% ( 8) 00:08:05.197 14720.394 - 14821.218: 98.0208% ( 9) 00:08:05.197 14821.218 - 14922.043: 98.0952% ( 10) 00:08:05.197 14922.043 - 15022.868: 98.1771% ( 11) 00:08:05.197 15022.868 - 15123.692: 98.2440% ( 9) 00:08:05.197 15123.692 - 15224.517: 98.3259% ( 11) 00:08:05.197 15224.517 - 15325.342: 98.4003% ( 10) 00:08:05.197 15325.342 - 15426.166: 98.4524% ( 7) 00:08:05.197 15426.166 - 15526.991: 98.4821% ( 4) 00:08:05.197 15526.991 - 15627.815: 98.5045% ( 3) 00:08:05.197 15627.815 - 15728.640: 98.5714% ( 9) 00:08:05.197 15728.640 - 15829.465: 98.6235% ( 7) 00:08:05.197 15829.465 - 15930.289: 98.6607% ( 5) 00:08:05.197 15930.289 - 16031.114: 98.6905% ( 4) 00:08:05.197 16031.114 - 16131.938: 98.7128% ( 3) 00:08:05.197 16131.938 - 16232.763: 98.7351% ( 3) 00:08:05.197 16232.763 - 16333.588: 98.7649% ( 4) 00:08:05.197 16333.588 - 16434.412: 98.8021% ( 5) 00:08:05.197 16434.412 - 16535.237: 98.8393% ( 5) 00:08:05.197 16535.237 - 16636.062: 98.8839% ( 6) 00:08:05.197 16636.062 - 16736.886: 98.9435% ( 8) 00:08:05.197 16736.886 - 16837.711: 98.9881% ( 6) 00:08:05.197 16837.711 - 16938.535: 99.0402% ( 7) 00:08:05.197 16938.535 - 17039.360: 99.0923% ( 7) 00:08:05.197 17039.360 - 17140.185: 99.1443% ( 7) 00:08:05.197 17140.185 - 17241.009: 99.1815% ( 5) 00:08:05.197 17241.009 - 17341.834: 99.2336% ( 7) 00:08:05.197 17341.834 - 17442.658: 99.2783% ( 6) 00:08:05.197 17442.658 - 17543.483: 99.3229% ( 6) 00:08:05.197 17543.483 - 17644.308: 99.3452% ( 3) 00:08:05.197 17644.308 - 17745.132: 99.3750% ( 4) 00:08:05.197 17745.132 - 17845.957: 99.3973% ( 3) 00:08:05.197 17845.957 - 17946.782: 99.4196% ( 3) 00:08:05.197 17946.782 - 18047.606: 99.4494% ( 4) 00:08:05.197 18047.606 - 18148.431: 99.4717% ( 3) 00:08:05.197 18148.431 - 18249.255: 99.5015% ( 4) 00:08:05.197 18249.255 - 18350.080: 99.5238% ( 3) 00:08:05.197 25811.102 - 26012.751: 99.5461% ( 3) 00:08:05.197 26012.751 - 26214.400: 99.6429% ( 13) 00:08:05.197 26214.400 - 26416.049: 99.6652% ( 3) 00:08:05.197 26416.049 - 26617.698: 99.7396% ( 10) 00:08:05.197 26617.698 - 26819.348: 
99.8289% ( 12) 00:08:05.197 26819.348 - 27020.997: 99.9256% ( 13) 00:08:05.197 27020.997 - 27222.646: 100.0000% ( 10) 00:08:05.197 00:08:05.197 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:05.197 ============================================================================== 00:08:05.197 Range in us Cumulative IO count 00:08:05.197 6402.363 - 6427.569: 0.0074% ( 1) 00:08:05.197 6427.569 - 6452.775: 0.0223% ( 2) 00:08:05.197 6452.775 - 6503.188: 0.0446% ( 3) 00:08:05.197 6503.188 - 6553.600: 0.0818% ( 5) 00:08:05.197 6553.600 - 6604.012: 0.1116% ( 4) 00:08:05.197 6604.012 - 6654.425: 0.1488% ( 5) 00:08:05.197 6654.425 - 6704.837: 0.1786% ( 4) 00:08:05.197 6704.837 - 6755.249: 0.2083% ( 4) 00:08:05.197 6755.249 - 6805.662: 0.2455% ( 5) 00:08:05.197 6805.662 - 6856.074: 0.2753% ( 4) 00:08:05.197 6856.074 - 6906.486: 0.3051% ( 4) 00:08:05.197 6906.486 - 6956.898: 0.3423% ( 5) 00:08:05.197 6956.898 - 7007.311: 0.3720% ( 4) 00:08:05.197 7007.311 - 7057.723: 0.4018% ( 4) 00:08:05.197 7057.723 - 7108.135: 0.4390% ( 5) 00:08:05.197 7108.135 - 7158.548: 0.4688% ( 4) 00:08:05.197 7158.548 - 7208.960: 0.4762% ( 1) 00:08:05.197 7763.495 - 7813.908: 0.4836% ( 1) 00:08:05.197 7813.908 - 7864.320: 0.5357% ( 7) 00:08:05.197 7864.320 - 7914.732: 0.5506% ( 2) 00:08:05.197 7914.732 - 7965.145: 0.5804% ( 4) 00:08:05.197 7965.145 - 8015.557: 0.6399% ( 8) 00:08:05.197 8015.557 - 8065.969: 0.7961% ( 21) 00:08:05.197 8065.969 - 8116.382: 0.9449% ( 20) 00:08:05.197 8116.382 - 8166.794: 1.0714% ( 17) 00:08:05.197 8166.794 - 8217.206: 1.2351% ( 22) 00:08:05.197 8217.206 - 8267.618: 1.6071% ( 50) 00:08:05.197 8267.618 - 8318.031: 1.9196% ( 42) 00:08:05.197 8318.031 - 8368.443: 2.4554% ( 72) 00:08:05.197 8368.443 - 8418.855: 3.1771% ( 97) 00:08:05.197 8418.855 - 8469.268: 4.5164% ( 180) 00:08:05.197 8469.268 - 8519.680: 6.3542% ( 247) 00:08:05.197 8519.680 - 8570.092: 8.4226% ( 278) 00:08:05.197 8570.092 - 8620.505: 10.8557% ( 327) 00:08:05.197 8620.505 - 8670.917: 13.5193% ( 358) 00:08:05.197 8670.917 - 8721.329: 16.3170% ( 376) 00:08:05.197 8721.329 - 8771.742: 19.6131% ( 443) 00:08:05.197 8771.742 - 8822.154: 23.0134% ( 457) 00:08:05.197 8822.154 - 8872.566: 26.3988% ( 455) 00:08:05.197 8872.566 - 8922.978: 29.9107% ( 472) 00:08:05.197 8922.978 - 8973.391: 33.5640% ( 491) 00:08:05.197 8973.391 - 9023.803: 37.1577% ( 483) 00:08:05.197 9023.803 - 9074.215: 41.2054% ( 544) 00:08:05.197 9074.215 - 9124.628: 45.2232% ( 540) 00:08:05.197 9124.628 - 9175.040: 49.2188% ( 537) 00:08:05.197 9175.040 - 9225.452: 53.1920% ( 534) 00:08:05.197 9225.452 - 9275.865: 57.0982% ( 525) 00:08:05.197 9275.865 - 9326.277: 60.9747% ( 521) 00:08:05.197 9326.277 - 9376.689: 64.6577% ( 495) 00:08:05.197 9376.689 - 9427.102: 67.7009% ( 409) 00:08:05.198 9427.102 - 9477.514: 70.3795% ( 360) 00:08:05.198 9477.514 - 9527.926: 72.7009% ( 312) 00:08:05.198 9527.926 - 9578.338: 74.7842% ( 280) 00:08:05.198 9578.338 - 9628.751: 76.7411% ( 263) 00:08:05.198 9628.751 - 9679.163: 78.3259% ( 213) 00:08:05.198 9679.163 - 9729.575: 79.8438% ( 204) 00:08:05.198 9729.575 - 9779.988: 81.2277% ( 186) 00:08:05.198 9779.988 - 9830.400: 82.3884% ( 156) 00:08:05.198 9830.400 - 9880.812: 83.4524% ( 143) 00:08:05.198 9880.812 - 9931.225: 84.3676% ( 123) 00:08:05.198 9931.225 - 9981.637: 85.0818% ( 96) 00:08:05.198 9981.637 - 10032.049: 85.7292% ( 87) 00:08:05.198 10032.049 - 10082.462: 86.2649% ( 72) 00:08:05.198 10082.462 - 10132.874: 86.6964% ( 58) 00:08:05.198 10132.874 - 10183.286: 87.0833% ( 52) 00:08:05.198 10183.286 - 10233.698: 87.4033% 
( 43) 00:08:05.198 10233.698 - 10284.111: 87.6562% ( 34) 00:08:05.198 10284.111 - 10334.523: 87.8571% ( 27) 00:08:05.198 10334.523 - 10384.935: 88.1101% ( 34) 00:08:05.198 10384.935 - 10435.348: 88.3631% ( 34) 00:08:05.198 10435.348 - 10485.760: 88.5938% ( 31) 00:08:05.198 10485.760 - 10536.172: 88.7798% ( 25) 00:08:05.198 10536.172 - 10586.585: 88.9583% ( 24) 00:08:05.198 10586.585 - 10636.997: 89.1667% ( 28) 00:08:05.198 10636.997 - 10687.409: 89.4122% ( 33) 00:08:05.198 10687.409 - 10737.822: 89.7024% ( 39) 00:08:05.198 10737.822 - 10788.234: 89.9628% ( 35) 00:08:05.198 10788.234 - 10838.646: 90.1860% ( 30) 00:08:05.198 10838.646 - 10889.058: 90.4390% ( 34) 00:08:05.198 10889.058 - 10939.471: 90.7664% ( 44) 00:08:05.198 10939.471 - 10989.883: 91.1310% ( 49) 00:08:05.198 10989.883 - 11040.295: 91.3914% ( 35) 00:08:05.198 11040.295 - 11090.708: 91.6667% ( 37) 00:08:05.198 11090.708 - 11141.120: 91.9717% ( 41) 00:08:05.198 11141.120 - 11191.532: 92.2396% ( 36) 00:08:05.198 11191.532 - 11241.945: 92.5149% ( 37) 00:08:05.198 11241.945 - 11292.357: 92.7753% ( 35) 00:08:05.198 11292.357 - 11342.769: 93.0208% ( 33) 00:08:05.198 11342.769 - 11393.182: 93.2366% ( 29) 00:08:05.198 11393.182 - 11443.594: 93.4747% ( 32) 00:08:05.198 11443.594 - 11494.006: 93.7054% ( 31) 00:08:05.198 11494.006 - 11544.418: 93.9137% ( 28) 00:08:05.198 11544.418 - 11594.831: 94.1295% ( 29) 00:08:05.198 11594.831 - 11645.243: 94.3229% ( 26) 00:08:05.198 11645.243 - 11695.655: 94.5164% ( 26) 00:08:05.198 11695.655 - 11746.068: 94.6875% ( 23) 00:08:05.198 11746.068 - 11796.480: 94.8810% ( 26) 00:08:05.198 11796.480 - 11846.892: 95.0223% ( 19) 00:08:05.198 11846.892 - 11897.305: 95.1488% ( 17) 00:08:05.198 11897.305 - 11947.717: 95.2530% ( 14) 00:08:05.198 11947.717 - 11998.129: 95.4241% ( 23) 00:08:05.198 11998.129 - 12048.542: 95.5804% ( 21) 00:08:05.198 12048.542 - 12098.954: 95.6920% ( 15) 00:08:05.198 12098.954 - 12149.366: 95.8259% ( 18) 00:08:05.198 12149.366 - 12199.778: 95.9747% ( 20) 00:08:05.198 12199.778 - 12250.191: 96.0863% ( 15) 00:08:05.198 12250.191 - 12300.603: 96.1979% ( 15) 00:08:05.198 12300.603 - 12351.015: 96.3690% ( 23) 00:08:05.198 12351.015 - 12401.428: 96.4807% ( 15) 00:08:05.198 12401.428 - 12451.840: 96.5774% ( 13) 00:08:05.198 12451.840 - 12502.252: 96.6592% ( 11) 00:08:05.198 12502.252 - 12552.665: 96.7560% ( 13) 00:08:05.198 12552.665 - 12603.077: 96.8452% ( 12) 00:08:05.198 12603.077 - 12653.489: 96.9048% ( 8) 00:08:05.198 12653.489 - 12703.902: 96.9792% ( 10) 00:08:05.198 12703.902 - 12754.314: 97.0238% ( 6) 00:08:05.198 12754.314 - 12804.726: 97.0759% ( 7) 00:08:05.198 12804.726 - 12855.138: 97.1280% ( 7) 00:08:05.198 12855.138 - 12905.551: 97.1801% ( 7) 00:08:05.198 12905.551 - 13006.375: 97.2842% ( 14) 00:08:05.198 13006.375 - 13107.200: 97.3661% ( 11) 00:08:05.198 13107.200 - 13208.025: 97.4182% ( 7) 00:08:05.198 13208.025 - 13308.849: 97.4777% ( 8) 00:08:05.198 13308.849 - 13409.674: 97.5372% ( 8) 00:08:05.198 13409.674 - 13510.498: 97.5670% ( 4) 00:08:05.198 13510.498 - 13611.323: 97.6042% ( 5) 00:08:05.198 13611.323 - 13712.148: 97.6190% ( 2) 00:08:05.198 14317.095 - 14417.920: 97.6414% ( 3) 00:08:05.198 14417.920 - 14518.745: 97.6935% ( 7) 00:08:05.198 14518.745 - 14619.569: 97.7381% ( 6) 00:08:05.198 14619.569 - 14720.394: 97.7827% ( 6) 00:08:05.198 14720.394 - 14821.218: 97.8348% ( 7) 00:08:05.198 14821.218 - 14922.043: 97.9018% ( 9) 00:08:05.198 14922.043 - 15022.868: 98.0134% ( 15) 00:08:05.198 15022.868 - 15123.692: 98.1101% ( 13) 00:08:05.198 15123.692 - 15224.517: 98.2068% ( 
13) 00:08:05.198 15224.517 - 15325.342: 98.3110% ( 14) 00:08:05.198 15325.342 - 15426.166: 98.3854% ( 10) 00:08:05.198 15426.166 - 15526.991: 98.4375% ( 7) 00:08:05.198 15526.991 - 15627.815: 98.4970% ( 8) 00:08:05.198 15627.815 - 15728.640: 98.5565% ( 8) 00:08:05.198 15728.640 - 15829.465: 98.6086% ( 7) 00:08:05.198 15829.465 - 15930.289: 98.6607% ( 7) 00:08:05.198 15930.289 - 16031.114: 98.6979% ( 5) 00:08:05.198 16031.114 - 16131.938: 98.7574% ( 8) 00:08:05.198 16131.938 - 16232.763: 98.8021% ( 6) 00:08:05.198 16232.763 - 16333.588: 98.8616% ( 8) 00:08:05.198 16333.588 - 16434.412: 98.9062% ( 6) 00:08:05.198 16434.412 - 16535.237: 98.9583% ( 7) 00:08:05.198 16535.237 - 16636.062: 99.0104% ( 7) 00:08:05.198 16636.062 - 16736.886: 99.0476% ( 5) 00:08:05.198 16938.535 - 17039.360: 99.0699% ( 3) 00:08:05.198 17039.360 - 17140.185: 99.0923% ( 3) 00:08:05.198 17140.185 - 17241.009: 99.1146% ( 3) 00:08:05.198 17241.009 - 17341.834: 99.1443% ( 4) 00:08:05.198 17341.834 - 17442.658: 99.1667% ( 3) 00:08:05.198 17442.658 - 17543.483: 99.1890% ( 3) 00:08:05.198 17543.483 - 17644.308: 99.2188% ( 4) 00:08:05.198 17644.308 - 17745.132: 99.2411% ( 3) 00:08:05.198 17745.132 - 17845.957: 99.2708% ( 4) 00:08:05.198 17845.957 - 17946.782: 99.2932% ( 3) 00:08:05.198 17946.782 - 18047.606: 99.3229% ( 4) 00:08:05.198 18047.606 - 18148.431: 99.3452% ( 3) 00:08:05.198 18148.431 - 18249.255: 99.3676% ( 3) 00:08:05.198 18249.255 - 18350.080: 99.3973% ( 4) 00:08:05.198 18350.080 - 18450.905: 99.4122% ( 2) 00:08:05.198 18450.905 - 18551.729: 99.4345% ( 3) 00:08:05.198 18551.729 - 18652.554: 99.4643% ( 4) 00:08:05.198 18652.554 - 18753.378: 99.4940% ( 4) 00:08:05.198 18753.378 - 18854.203: 99.5164% ( 3) 00:08:05.198 18854.203 - 18955.028: 99.5238% ( 1) 00:08:05.198 25306.978 - 25407.803: 99.5387% ( 2) 00:08:05.198 25407.803 - 25508.628: 99.5833% ( 6) 00:08:05.198 25508.628 - 25609.452: 99.6280% ( 6) 00:08:05.198 25609.452 - 25710.277: 99.6726% ( 6) 00:08:05.198 25710.277 - 25811.102: 99.7173% ( 6) 00:08:05.198 25811.102 - 26012.751: 99.7917% ( 10) 00:08:05.198 26012.751 - 26214.400: 99.8810% ( 12) 00:08:05.198 26214.400 - 26416.049: 99.9702% ( 12) 00:08:05.198 26416.049 - 26617.698: 100.0000% ( 4) 00:08:05.198 00:08:05.198 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:05.198 ============================================================================== 00:08:05.198 Range in us Cumulative IO count 00:08:05.198 5822.622 - 5847.828: 0.0298% ( 4) 00:08:05.198 5847.828 - 5873.034: 0.0446% ( 2) 00:08:05.198 5898.240 - 5923.446: 0.0595% ( 2) 00:08:05.199 5923.446 - 5948.652: 0.0818% ( 3) 00:08:05.199 5948.652 - 5973.858: 0.0967% ( 2) 00:08:05.199 5973.858 - 5999.065: 0.1042% ( 1) 00:08:05.199 5999.065 - 6024.271: 0.1265% ( 3) 00:08:05.199 6024.271 - 6049.477: 0.1339% ( 1) 00:08:05.199 6049.477 - 6074.683: 0.1414% ( 1) 00:08:05.199 6074.683 - 6099.889: 0.1637% ( 3) 00:08:05.199 6099.889 - 6125.095: 0.1711% ( 1) 00:08:05.199 6125.095 - 6150.302: 0.1935% ( 3) 00:08:05.199 6150.302 - 6175.508: 0.2009% ( 1) 00:08:05.199 6175.508 - 6200.714: 0.2083% ( 1) 00:08:05.199 6200.714 - 6225.920: 0.2307% ( 3) 00:08:05.199 6225.920 - 6251.126: 0.2381% ( 1) 00:08:05.199 6251.126 - 6276.332: 0.2530% ( 2) 00:08:05.199 6276.332 - 6301.538: 0.2604% ( 1) 00:08:05.199 6301.538 - 6326.745: 0.2753% ( 2) 00:08:05.199 6326.745 - 6351.951: 0.2902% ( 2) 00:08:05.199 6351.951 - 6377.157: 0.3051% ( 2) 00:08:05.199 6377.157 - 6402.363: 0.3125% ( 1) 00:08:05.199 6402.363 - 6427.569: 0.3348% ( 3) 00:08:05.199 6427.569 - 6452.775: 
0.3423% ( 1) 00:08:05.199 6452.775 - 6503.188: 0.3795% ( 5) 00:08:05.199 6503.188 - 6553.600: 0.3869% ( 1) 00:08:05.199 6553.600 - 6604.012: 0.4315% ( 6) 00:08:05.199 6604.012 - 6654.425: 0.4613% ( 4) 00:08:05.199 6654.425 - 6704.837: 0.4762% ( 2) 00:08:05.199 7713.083 - 7763.495: 0.4985% ( 3) 00:08:05.199 7763.495 - 7813.908: 0.5357% ( 5) 00:08:05.199 7813.908 - 7864.320: 0.5804% ( 6) 00:08:05.199 7864.320 - 7914.732: 0.6696% ( 12) 00:08:05.199 7914.732 - 7965.145: 0.7217% ( 7) 00:08:05.199 7965.145 - 8015.557: 0.8185% ( 13) 00:08:05.199 8015.557 - 8065.969: 1.0417% ( 30) 00:08:05.199 8065.969 - 8116.382: 1.2202% ( 24) 00:08:05.199 8116.382 - 8166.794: 1.3616% ( 19) 00:08:05.199 8166.794 - 8217.206: 1.6295% ( 36) 00:08:05.199 8217.206 - 8267.618: 2.0685% ( 59) 00:08:05.199 8267.618 - 8318.031: 2.8125% ( 100) 00:08:05.199 8318.031 - 8368.443: 3.9062% ( 147) 00:08:05.199 8368.443 - 8418.855: 5.3497% ( 194) 00:08:05.199 8418.855 - 8469.268: 7.1205% ( 238) 00:08:05.199 8469.268 - 8519.680: 9.2485% ( 286) 00:08:05.199 8519.680 - 8570.092: 11.5030% ( 303) 00:08:05.199 8570.092 - 8620.505: 13.7351% ( 300) 00:08:05.199 8620.505 - 8670.917: 16.3170% ( 347) 00:08:05.199 8670.917 - 8721.329: 18.8616% ( 342) 00:08:05.199 8721.329 - 8771.742: 21.6443% ( 374) 00:08:05.199 8771.742 - 8822.154: 24.7098% ( 412) 00:08:05.199 8822.154 - 8872.566: 27.8423% ( 421) 00:08:05.199 8872.566 - 8922.978: 30.9821% ( 422) 00:08:05.199 8922.978 - 8973.391: 34.2188% ( 435) 00:08:05.199 8973.391 - 9023.803: 37.6562% ( 462) 00:08:05.199 9023.803 - 9074.215: 41.1161% ( 465) 00:08:05.199 9074.215 - 9124.628: 44.6503% ( 475) 00:08:05.199 9124.628 - 9175.040: 47.9167% ( 439) 00:08:05.199 9175.040 - 9225.452: 51.8452% ( 528) 00:08:05.199 9225.452 - 9275.865: 55.2083% ( 452) 00:08:05.199 9275.865 - 9326.277: 58.7723% ( 479) 00:08:05.199 9326.277 - 9376.689: 61.9494% ( 427) 00:08:05.199 9376.689 - 9427.102: 65.2455% ( 443) 00:08:05.199 9427.102 - 9477.514: 68.0878% ( 382) 00:08:05.199 9477.514 - 9527.926: 70.7292% ( 355) 00:08:05.199 9527.926 - 9578.338: 72.9241% ( 295) 00:08:05.199 9578.338 - 9628.751: 74.9479% ( 272) 00:08:05.199 9628.751 - 9679.163: 76.8452% ( 255) 00:08:05.199 9679.163 - 9729.575: 78.4449% ( 215) 00:08:05.199 9729.575 - 9779.988: 79.9182% ( 198) 00:08:05.199 9779.988 - 9830.400: 81.2277% ( 176) 00:08:05.199 9830.400 - 9880.812: 82.5595% ( 179) 00:08:05.199 9880.812 - 9931.225: 83.6310% ( 144) 00:08:05.199 9931.225 - 9981.637: 84.5461% ( 123) 00:08:05.199 9981.637 - 10032.049: 85.2232% ( 91) 00:08:05.199 10032.049 - 10082.462: 85.8036% ( 78) 00:08:05.199 10082.462 - 10132.874: 86.2798% ( 64) 00:08:05.199 10132.874 - 10183.286: 86.8378% ( 75) 00:08:05.199 10183.286 - 10233.698: 87.3438% ( 68) 00:08:05.199 10233.698 - 10284.111: 87.6414% ( 40) 00:08:05.199 10284.111 - 10334.523: 88.1101% ( 63) 00:08:05.199 10334.523 - 10384.935: 88.3929% ( 38) 00:08:05.199 10384.935 - 10435.348: 88.7351% ( 46) 00:08:05.199 10435.348 - 10485.760: 89.0179% ( 38) 00:08:05.199 10485.760 - 10536.172: 89.3824% ( 49) 00:08:05.199 10536.172 - 10586.585: 89.6577% ( 37) 00:08:05.199 10586.585 - 10636.997: 89.8958% ( 32) 00:08:05.199 10636.997 - 10687.409: 90.0521% ( 21) 00:08:05.199 10687.409 - 10737.822: 90.2530% ( 27) 00:08:05.199 10737.822 - 10788.234: 90.5655% ( 42) 00:08:05.199 10788.234 - 10838.646: 90.7812% ( 29) 00:08:05.199 10838.646 - 10889.058: 90.9524% ( 23) 00:08:05.199 10889.058 - 10939.471: 91.1533% ( 27) 00:08:05.199 10939.471 - 10989.883: 91.4286% ( 37) 00:08:05.199 10989.883 - 11040.295: 91.6518% ( 30) 00:08:05.199 
11040.295 - 11090.708: 91.9122% ( 35) 00:08:05.199 11090.708 - 11141.120: 92.1577% ( 33) 00:08:05.199 11141.120 - 11191.532: 92.3958% ( 32) 00:08:05.199 11191.532 - 11241.945: 92.6860% ( 39) 00:08:05.199 11241.945 - 11292.357: 92.9167% ( 31) 00:08:05.199 11292.357 - 11342.769: 93.1771% ( 35) 00:08:05.199 11342.769 - 11393.182: 93.4226% ( 33) 00:08:05.199 11393.182 - 11443.594: 93.7054% ( 38) 00:08:05.199 11443.594 - 11494.006: 93.9732% ( 36) 00:08:05.199 11494.006 - 11544.418: 94.1815% ( 28) 00:08:05.199 11544.418 - 11594.831: 94.3750% ( 26) 00:08:05.199 11594.831 - 11645.243: 94.5164% ( 19) 00:08:05.199 11645.243 - 11695.655: 94.7098% ( 26) 00:08:05.199 11695.655 - 11746.068: 94.9107% ( 27) 00:08:05.199 11746.068 - 11796.480: 95.0372% ( 17) 00:08:05.199 11796.480 - 11846.892: 95.2307% ( 26) 00:08:05.199 11846.892 - 11897.305: 95.3497% ( 16) 00:08:05.199 11897.305 - 11947.717: 95.5432% ( 26) 00:08:05.199 11947.717 - 11998.129: 95.6696% ( 17) 00:08:05.199 11998.129 - 12048.542: 95.7887% ( 16) 00:08:05.199 12048.542 - 12098.954: 95.8705% ( 11) 00:08:05.199 12098.954 - 12149.366: 95.9821% ( 15) 00:08:05.199 12149.366 - 12199.778: 96.0491% ( 9) 00:08:05.199 12199.778 - 12250.191: 96.1161% ( 9) 00:08:05.199 12250.191 - 12300.603: 96.2128% ( 13) 00:08:05.199 12300.603 - 12351.015: 96.2500% ( 5) 00:08:05.199 12351.015 - 12401.428: 96.3318% ( 11) 00:08:05.199 12401.428 - 12451.840: 96.3914% ( 8) 00:08:05.199 12451.840 - 12502.252: 96.4583% ( 9) 00:08:05.199 12502.252 - 12552.665: 96.5327% ( 10) 00:08:05.199 12552.665 - 12603.077: 96.5997% ( 9) 00:08:05.199 12603.077 - 12653.489: 96.6592% ( 8) 00:08:05.199 12653.489 - 12703.902: 96.7262% ( 9) 00:08:05.199 12703.902 - 12754.314: 96.7485% ( 3) 00:08:05.199 12754.314 - 12804.726: 96.8006% ( 7) 00:08:05.199 12804.726 - 12855.138: 96.8750% ( 10) 00:08:05.199 12855.138 - 12905.551: 96.9196% ( 6) 00:08:05.199 12905.551 - 13006.375: 97.0238% ( 14) 00:08:05.199 13006.375 - 13107.200: 97.1652% ( 19) 00:08:05.199 13107.200 - 13208.025: 97.2173% ( 7) 00:08:05.199 13208.025 - 13308.849: 97.2842% ( 9) 00:08:05.199 13308.849 - 13409.674: 97.3586% ( 10) 00:08:05.199 13409.674 - 13510.498: 97.4107% ( 7) 00:08:05.199 13510.498 - 13611.323: 97.4777% ( 9) 00:08:05.199 13611.323 - 13712.148: 97.5149% ( 5) 00:08:05.199 13712.148 - 13812.972: 97.5446% ( 4) 00:08:05.199 13812.972 - 13913.797: 97.6116% ( 9) 00:08:05.199 13913.797 - 14014.622: 97.6935% ( 11) 00:08:05.199 14014.622 - 14115.446: 97.7679% ( 10) 00:08:05.199 14115.446 - 14216.271: 97.8274% ( 8) 00:08:05.199 14216.271 - 14317.095: 97.9092% ( 11) 00:08:05.199 14317.095 - 14417.920: 97.9390% ( 4) 00:08:05.199 14417.920 - 14518.745: 97.9985% ( 8) 00:08:05.199 14518.745 - 14619.569: 98.0655% ( 9) 00:08:05.199 14619.569 - 14720.394: 98.1250% ( 8) 00:08:05.199 14720.394 - 14821.218: 98.1696% ( 6) 00:08:05.199 14821.218 - 14922.043: 98.2515% ( 11) 00:08:05.199 14922.043 - 15022.868: 98.3036% ( 7) 00:08:05.200 15022.868 - 15123.692: 98.3408% ( 5) 00:08:05.200 15123.692 - 15224.517: 98.3631% ( 3) 00:08:05.200 15224.517 - 15325.342: 98.3780% ( 2) 00:08:05.200 15325.342 - 15426.166: 98.4003% ( 3) 00:08:05.200 15426.166 - 15526.991: 98.4152% ( 2) 00:08:05.200 15526.991 - 15627.815: 98.4747% ( 8) 00:08:05.200 15627.815 - 15728.640: 98.5193% ( 6) 00:08:05.200 15728.640 - 15829.465: 98.5565% ( 5) 00:08:05.200 15829.465 - 15930.289: 98.5863% ( 4) 00:08:05.200 15930.289 - 16031.114: 98.6384% ( 7) 00:08:05.200 16031.114 - 16131.938: 98.6607% ( 3) 00:08:05.200 16131.938 - 16232.763: 98.7202% ( 8) 00:08:05.200 16232.763 - 
16333.588: 98.7649% ( 6) 00:08:05.200 16333.588 - 16434.412: 98.8170% ( 7) 00:08:05.200 16434.412 - 16535.237: 98.8542% ( 5) 00:08:05.200 16535.237 - 16636.062: 98.8988% ( 6) 00:08:05.200 16636.062 - 16736.886: 98.9435% ( 6) 00:08:05.200 16736.886 - 16837.711: 98.9658% ( 3) 00:08:05.200 16837.711 - 16938.535: 99.0327% ( 9) 00:08:05.200 16938.535 - 17039.360: 99.0476% ( 2) 00:08:05.200 17140.185 - 17241.009: 99.0551% ( 1) 00:08:05.200 17241.009 - 17341.834: 99.0774% ( 3) 00:08:05.200 17341.834 - 17442.658: 99.0997% ( 3) 00:08:05.200 17442.658 - 17543.483: 99.1146% ( 2) 00:08:05.200 17543.483 - 17644.308: 99.1443% ( 4) 00:08:05.200 17644.308 - 17745.132: 99.1592% ( 2) 00:08:05.200 17745.132 - 17845.957: 99.1815% ( 3) 00:08:05.200 17845.957 - 17946.782: 99.2039% ( 3) 00:08:05.200 17946.782 - 18047.606: 99.2113% ( 1) 00:08:05.200 18047.606 - 18148.431: 99.2336% ( 3) 00:08:05.200 18148.431 - 18249.255: 99.2560% ( 3) 00:08:05.200 18249.255 - 18350.080: 99.2932% ( 5) 00:08:05.200 18350.080 - 18450.905: 99.3080% ( 2) 00:08:05.200 18450.905 - 18551.729: 99.3229% ( 2) 00:08:05.200 18551.729 - 18652.554: 99.3452% ( 3) 00:08:05.200 18652.554 - 18753.378: 99.3676% ( 3) 00:08:05.200 18753.378 - 18854.203: 99.3824% ( 2) 00:08:05.200 18854.203 - 18955.028: 99.4122% ( 4) 00:08:05.200 18955.028 - 19055.852: 99.4271% ( 2) 00:08:05.200 19055.852 - 19156.677: 99.4494% ( 3) 00:08:05.200 19156.677 - 19257.502: 99.4717% ( 3) 00:08:05.200 19257.502 - 19358.326: 99.4940% ( 3) 00:08:05.200 19358.326 - 19459.151: 99.5164% ( 3) 00:08:05.200 19459.151 - 19559.975: 99.5238% ( 1) 00:08:05.200 24802.855 - 24903.680: 99.5610% ( 5) 00:08:05.200 24903.680 - 25004.505: 99.6057% ( 6) 00:08:05.200 25004.505 - 25105.329: 99.6429% ( 5) 00:08:05.200 25105.329 - 25206.154: 99.6652% ( 3) 00:08:05.200 25206.154 - 25306.978: 99.7396% ( 10) 00:08:05.200 25306.978 - 25407.803: 99.7693% ( 4) 00:08:05.200 25407.803 - 25508.628: 99.8065% ( 5) 00:08:05.200 25508.628 - 25609.452: 99.8438% ( 5) 00:08:05.200 25609.452 - 25710.277: 99.8884% ( 6) 00:08:05.200 25710.277 - 25811.102: 99.9330% ( 6) 00:08:05.200 25811.102 - 26012.751: 100.0000% ( 9) 00:08:05.200 00:08:05.200 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:05.200 ============================================================================== 00:08:05.200 Range in us Cumulative IO count 00:08:05.200 5444.529 - 5469.735: 0.0521% ( 7) 00:08:05.200 5469.735 - 5494.942: 0.0595% ( 1) 00:08:05.200 5494.942 - 5520.148: 0.0744% ( 2) 00:08:05.200 5520.148 - 5545.354: 0.0818% ( 1) 00:08:05.200 5545.354 - 5570.560: 0.1042% ( 3) 00:08:05.200 5570.560 - 5595.766: 0.1190% ( 2) 00:08:05.200 5595.766 - 5620.972: 0.1265% ( 1) 00:08:05.200 5620.972 - 5646.178: 0.1414% ( 2) 00:08:05.200 5646.178 - 5671.385: 0.1562% ( 2) 00:08:05.200 5671.385 - 5696.591: 0.1711% ( 2) 00:08:05.200 5696.591 - 5721.797: 0.1860% ( 2) 00:08:05.200 5721.797 - 5747.003: 0.2009% ( 2) 00:08:05.200 5747.003 - 5772.209: 0.2158% ( 2) 00:08:05.200 5772.209 - 5797.415: 0.2307% ( 2) 00:08:05.200 5797.415 - 5822.622: 0.2455% ( 2) 00:08:05.200 5822.622 - 5847.828: 0.2679% ( 3) 00:08:05.200 5847.828 - 5873.034: 0.2827% ( 2) 00:08:05.200 5873.034 - 5898.240: 0.2976% ( 2) 00:08:05.200 5898.240 - 5923.446: 0.3125% ( 2) 00:08:05.200 5923.446 - 5948.652: 0.3274% ( 2) 00:08:05.200 5948.652 - 5973.858: 0.3423% ( 2) 00:08:05.200 5973.858 - 5999.065: 0.3571% ( 2) 00:08:05.200 5999.065 - 6024.271: 0.3720% ( 2) 00:08:05.200 6024.271 - 6049.477: 0.3869% ( 2) 00:08:05.200 6049.477 - 6074.683: 0.4018% ( 2) 00:08:05.200 6074.683 - 
6099.889: 0.4167% ( 2) 00:08:05.200 6099.889 - 6125.095: 0.4315% ( 2) 00:08:05.200 6125.095 - 6150.302: 0.4464% ( 2) 00:08:05.200 6150.302 - 6175.508: 0.4613% ( 2) 00:08:05.200 6175.508 - 6200.714: 0.4762% ( 2) 00:08:05.200 7763.495 - 7813.908: 0.4911% ( 2) 00:08:05.200 7813.908 - 7864.320: 0.5134% ( 3) 00:08:05.200 7864.320 - 7914.732: 0.5655% ( 7) 00:08:05.200 7914.732 - 7965.145: 0.6473% ( 11) 00:08:05.200 7965.145 - 8015.557: 0.6994% ( 7) 00:08:05.200 8015.557 - 8065.969: 0.8036% ( 14) 00:08:05.200 8065.969 - 8116.382: 0.9003% ( 13) 00:08:05.200 8116.382 - 8166.794: 1.1086% ( 28) 00:08:05.200 8166.794 - 8217.206: 1.3170% ( 28) 00:08:05.200 8217.206 - 8267.618: 1.6667% ( 47) 00:08:05.200 8267.618 - 8318.031: 2.1652% ( 67) 00:08:05.200 8318.031 - 8368.443: 2.8571% ( 93) 00:08:05.200 8368.443 - 8418.855: 3.7054% ( 114) 00:08:05.200 8418.855 - 8469.268: 5.0893% ( 186) 00:08:05.200 8469.268 - 8519.680: 6.7188% ( 219) 00:08:05.200 8519.680 - 8570.092: 8.8393% ( 285) 00:08:05.200 8570.092 - 8620.505: 11.5476% ( 364) 00:08:05.200 8620.505 - 8670.917: 14.2485% ( 363) 00:08:05.200 8670.917 - 8721.329: 17.1503% ( 390) 00:08:05.200 8721.329 - 8771.742: 20.1935% ( 409) 00:08:05.200 8771.742 - 8822.154: 23.6533% ( 465) 00:08:05.200 8822.154 - 8872.566: 27.0759% ( 460) 00:08:05.200 8872.566 - 8922.978: 30.5580% ( 468) 00:08:05.200 8922.978 - 8973.391: 34.2336% ( 494) 00:08:05.200 8973.391 - 9023.803: 38.1101% ( 521) 00:08:05.200 9023.803 - 9074.215: 41.8452% ( 502) 00:08:05.200 9074.215 - 9124.628: 45.4762% ( 488) 00:08:05.200 9124.628 - 9175.040: 49.3750% ( 524) 00:08:05.200 9175.040 - 9225.452: 53.2515% ( 521) 00:08:05.200 9225.452 - 9275.865: 56.9196% ( 493) 00:08:05.200 9275.865 - 9326.277: 60.5208% ( 484) 00:08:05.200 9326.277 - 9376.689: 63.9211% ( 457) 00:08:05.200 9376.689 - 9427.102: 66.8304% ( 391) 00:08:05.200 9427.102 - 9477.514: 69.6131% ( 374) 00:08:05.200 9477.514 - 9527.926: 71.7857% ( 292) 00:08:05.200 9527.926 - 9578.338: 73.8690% ( 280) 00:08:05.200 9578.338 - 9628.751: 75.7143% ( 248) 00:08:05.200 9628.751 - 9679.163: 77.5446% ( 246) 00:08:05.200 9679.163 - 9729.575: 79.1369% ( 214) 00:08:05.200 9729.575 - 9779.988: 80.5283% ( 187) 00:08:05.200 9779.988 - 9830.400: 81.7708% ( 167) 00:08:05.200 9830.400 - 9880.812: 82.7902% ( 137) 00:08:05.200 9880.812 - 9931.225: 83.6905% ( 121) 00:08:05.200 9931.225 - 9981.637: 84.4568% ( 103) 00:08:05.200 9981.637 - 10032.049: 85.1711% ( 96) 00:08:05.200 10032.049 - 10082.462: 85.8333% ( 89) 00:08:05.200 10082.462 - 10132.874: 86.4062% ( 77) 00:08:05.200 10132.874 - 10183.286: 86.9345% ( 71) 00:08:05.200 10183.286 - 10233.698: 87.4330% ( 67) 00:08:05.200 10233.698 - 10284.111: 87.9539% ( 70) 00:08:05.201 10284.111 - 10334.523: 88.4077% ( 61) 00:08:05.201 10334.523 - 10384.935: 88.7872% ( 51) 00:08:05.201 10384.935 - 10435.348: 89.1220% ( 45) 00:08:05.201 10435.348 - 10485.760: 89.4717% ( 47) 00:08:05.201 10485.760 - 10536.172: 89.7247% ( 34) 00:08:05.201 10536.172 - 10586.585: 89.9330% ( 28) 00:08:05.201 10586.585 - 10636.997: 90.1042% ( 23) 00:08:05.201 10636.997 - 10687.409: 90.2827% ( 24) 00:08:05.201 10687.409 - 10737.822: 90.5208% ( 32) 00:08:05.201 10737.822 - 10788.234: 90.7292% ( 28) 00:08:05.201 10788.234 - 10838.646: 90.9152% ( 25) 00:08:05.201 10838.646 - 10889.058: 91.1235% ( 28) 00:08:05.201 10889.058 - 10939.471: 91.3839% ( 35) 00:08:05.201 10939.471 - 10989.883: 91.6220% ( 32) 00:08:05.201 10989.883 - 11040.295: 91.8601% ( 32) 00:08:05.201 11040.295 - 11090.708: 92.1875% ( 44) 00:08:05.201 11090.708 - 11141.120: 92.4628% ( 37) 
00:08:05.201 11141.120 - 11191.532: 92.6935% ( 31) 00:08:05.201 11191.532 - 11241.945: 92.9911% ( 40) 00:08:05.201 11241.945 - 11292.357: 93.2440% ( 34) 00:08:05.201 11292.357 - 11342.769: 93.4970% ( 34) 00:08:05.201 11342.769 - 11393.182: 93.7202% ( 30) 00:08:05.201 11393.182 - 11443.594: 93.9360% ( 29) 00:08:05.201 11443.594 - 11494.006: 94.1741% ( 32) 00:08:05.201 11494.006 - 11544.418: 94.3973% ( 30) 00:08:05.201 11544.418 - 11594.831: 94.6205% ( 30) 00:08:05.201 11594.831 - 11645.243: 94.7991% ( 24) 00:08:05.201 11645.243 - 11695.655: 94.9554% ( 21) 00:08:05.201 11695.655 - 11746.068: 95.1116% ( 21) 00:08:05.201 11746.068 - 11796.480: 95.2604% ( 20) 00:08:05.201 11796.480 - 11846.892: 95.4092% ( 20) 00:08:05.201 11846.892 - 11897.305: 95.5506% ( 19) 00:08:05.201 11897.305 - 11947.717: 95.6994% ( 20) 00:08:05.201 11947.717 - 11998.129: 95.8185% ( 16) 00:08:05.201 11998.129 - 12048.542: 95.9077% ( 12) 00:08:05.201 12048.542 - 12098.954: 95.9598% ( 7) 00:08:05.201 12098.954 - 12149.366: 96.0193% ( 8) 00:08:05.201 12149.366 - 12199.778: 96.0863% ( 9) 00:08:05.201 12199.778 - 12250.191: 96.1310% ( 6) 00:08:05.201 12250.191 - 12300.603: 96.1756% ( 6) 00:08:05.201 12300.603 - 12351.015: 96.2054% ( 4) 00:08:05.201 12351.015 - 12401.428: 96.2277% ( 3) 00:08:05.201 12401.428 - 12451.840: 96.2500% ( 3) 00:08:05.201 12451.840 - 12502.252: 96.2798% ( 4) 00:08:05.201 12502.252 - 12552.665: 96.2946% ( 2) 00:08:05.201 12552.665 - 12603.077: 96.3021% ( 1) 00:08:05.201 12603.077 - 12653.489: 96.3170% ( 2) 00:08:05.201 12653.489 - 12703.902: 96.3318% ( 2) 00:08:05.201 12703.902 - 12754.314: 96.3467% ( 2) 00:08:05.201 12754.314 - 12804.726: 96.3616% ( 2) 00:08:05.201 12804.726 - 12855.138: 96.3690% ( 1) 00:08:05.201 12855.138 - 12905.551: 96.3839% ( 2) 00:08:05.201 12905.551 - 13006.375: 96.4360% ( 7) 00:08:05.201 13006.375 - 13107.200: 96.5551% ( 16) 00:08:05.201 13107.200 - 13208.025: 96.6741% ( 16) 00:08:05.201 13208.025 - 13308.849: 96.8304% ( 21) 00:08:05.201 13308.849 - 13409.674: 97.0164% ( 25) 00:08:05.201 13409.674 - 13510.498: 97.1949% ( 24) 00:08:05.201 13510.498 - 13611.323: 97.3661% ( 23) 00:08:05.201 13611.323 - 13712.148: 97.5446% ( 24) 00:08:05.201 13712.148 - 13812.972: 97.7232% ( 24) 00:08:05.201 13812.972 - 13913.797: 97.8869% ( 22) 00:08:05.201 13913.797 - 14014.622: 98.0580% ( 23) 00:08:05.201 14014.622 - 14115.446: 98.1771% ( 16) 00:08:05.201 14115.446 - 14216.271: 98.2664% ( 12) 00:08:05.201 14216.271 - 14317.095: 98.3333% ( 9) 00:08:05.201 14317.095 - 14417.920: 98.3854% ( 7) 00:08:05.201 14417.920 - 14518.745: 98.4152% ( 4) 00:08:05.201 14518.745 - 14619.569: 98.4375% ( 3) 00:08:05.201 14619.569 - 14720.394: 98.4673% ( 4) 00:08:05.201 14720.394 - 14821.218: 98.4896% ( 3) 00:08:05.201 14821.218 - 14922.043: 98.5119% ( 3) 00:08:05.201 14922.043 - 15022.868: 98.5417% ( 4) 00:08:05.201 15022.868 - 15123.692: 98.5714% ( 4) 00:08:05.201 15829.465 - 15930.289: 98.6384% ( 9) 00:08:05.201 15930.289 - 16031.114: 98.6830% ( 6) 00:08:05.201 16031.114 - 16131.938: 98.7351% ( 7) 00:08:05.201 16131.938 - 16232.763: 98.7872% ( 7) 00:08:05.201 16232.763 - 16333.588: 98.8393% ( 7) 00:08:05.201 16333.588 - 16434.412: 98.8839% ( 6) 00:08:05.201 16434.412 - 16535.237: 98.9286% ( 6) 00:08:05.201 16535.237 - 16636.062: 98.9807% ( 7) 00:08:05.201 16636.062 - 16736.886: 99.0253% ( 6) 00:08:05.201 16736.886 - 16837.711: 99.0476% ( 3) 00:08:05.201 17845.957 - 17946.782: 99.0699% ( 3) 00:08:05.201 17946.782 - 18047.606: 99.0923% ( 3) 00:08:05.201 18047.606 - 18148.431: 99.1220% ( 4) 00:08:05.201 18148.431 - 
18249.255: 99.1443% ( 3) 00:08:05.201 18249.255 - 18350.080: 99.1741% ( 4) 00:08:05.201 18350.080 - 18450.905: 99.1964% ( 3) 00:08:05.201 18450.905 - 18551.729: 99.2262% ( 4) 00:08:05.201 18551.729 - 18652.554: 99.2485% ( 3) 00:08:05.201 18652.554 - 18753.378: 99.2708% ( 3) 00:08:05.201 18753.378 - 18854.203: 99.3006% ( 4) 00:08:05.201 18854.203 - 18955.028: 99.3229% ( 3) 00:08:05.201 18955.028 - 19055.852: 99.3378% ( 2) 00:08:05.201 19055.852 - 19156.677: 99.3676% ( 4) 00:08:05.201 19156.677 - 19257.502: 99.3899% ( 3) 00:08:05.201 19257.502 - 19358.326: 99.4122% ( 3) 00:08:05.201 19358.326 - 19459.151: 99.4420% ( 4) 00:08:05.201 19459.151 - 19559.975: 99.4643% ( 3) 00:08:05.201 19559.975 - 19660.800: 99.4940% ( 4) 00:08:05.201 19660.800 - 19761.625: 99.5164% ( 3) 00:08:05.201 19761.625 - 19862.449: 99.5238% ( 1) 00:08:05.201 24298.732 - 24399.557: 99.5685% ( 6) 00:08:05.201 24399.557 - 24500.382: 99.6205% ( 7) 00:08:05.201 24500.382 - 24601.206: 99.6652% ( 6) 00:08:05.201 24601.206 - 24702.031: 99.7098% ( 6) 00:08:05.201 24702.031 - 24802.855: 99.7545% ( 6) 00:08:05.201 24802.855 - 24903.680: 99.7991% ( 6) 00:08:05.201 24903.680 - 25004.505: 99.8438% ( 6) 00:08:05.201 25004.505 - 25105.329: 99.8958% ( 7) 00:08:05.201 25105.329 - 25206.154: 99.9405% ( 6) 00:08:05.201 25206.154 - 25306.978: 99.9851% ( 6) 00:08:05.201 25306.978 - 25407.803: 100.0000% ( 2) 00:08:05.201 00:08:05.201 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:05.201 ============================================================================== 00:08:05.201 Range in us Cumulative IO count 00:08:05.201 4310.252 - 4335.458: 0.0223% ( 3) 00:08:05.201 4335.458 - 4360.665: 0.0372% ( 2) 00:08:05.201 4360.665 - 4385.871: 0.0521% ( 2) 00:08:05.201 4385.871 - 4411.077: 0.0595% ( 1) 00:08:05.201 4411.077 - 4436.283: 0.0818% ( 3) 00:08:05.201 4436.283 - 4461.489: 0.1042% ( 3) 00:08:05.201 4461.489 - 4486.695: 0.1265% ( 3) 00:08:05.201 4486.695 - 4511.902: 0.1414% ( 2) 00:08:05.201 4511.902 - 4537.108: 0.1488% ( 1) 00:08:05.201 4537.108 - 4562.314: 0.1637% ( 2) 00:08:05.201 4562.314 - 4587.520: 0.1786% ( 2) 00:08:05.201 4587.520 - 4612.726: 0.1935% ( 2) 00:08:05.201 4612.726 - 4637.932: 0.2083% ( 2) 00:08:05.201 4637.932 - 4663.138: 0.2232% ( 2) 00:08:05.201 4663.138 - 4688.345: 0.2381% ( 2) 00:08:05.201 4688.345 - 4713.551: 0.2530% ( 2) 00:08:05.201 4713.551 - 4738.757: 0.2753% ( 3) 00:08:05.201 4738.757 - 4763.963: 0.2902% ( 2) 00:08:05.201 4763.963 - 4789.169: 0.3051% ( 2) 00:08:05.201 4789.169 - 4814.375: 0.3199% ( 2) 00:08:05.201 4814.375 - 4839.582: 0.3348% ( 2) 00:08:05.201 4839.582 - 4864.788: 0.3497% ( 2) 00:08:05.201 4864.788 - 4889.994: 0.3720% ( 3) 00:08:05.201 4889.994 - 4915.200: 0.3869% ( 2) 00:08:05.201 4915.200 - 4940.406: 0.4018% ( 2) 00:08:05.202 4940.406 - 4965.612: 0.4092% ( 1) 00:08:05.202 4965.612 - 4990.818: 0.4241% ( 2) 00:08:05.202 4990.818 - 5016.025: 0.4464% ( 3) 00:08:05.202 5016.025 - 5041.231: 0.4613% ( 2) 00:08:05.202 5041.231 - 5066.437: 0.4762% ( 2) 00:08:05.202 7813.908 - 7864.320: 0.4985% ( 3) 00:08:05.202 7864.320 - 7914.732: 0.5432% ( 6) 00:08:05.202 7914.732 - 7965.145: 0.5952% ( 7) 00:08:05.202 7965.145 - 8015.557: 0.6696% ( 10) 00:08:05.202 8015.557 - 8065.969: 0.7589% ( 12) 00:08:05.202 8065.969 - 8116.382: 0.9301% ( 23) 00:08:05.202 8116.382 - 8166.794: 1.2872% ( 48) 00:08:05.202 8166.794 - 8217.206: 1.5848% ( 40) 00:08:05.202 8217.206 - 8267.618: 1.9494% ( 49) 00:08:05.202 8267.618 - 8318.031: 2.4182% ( 63) 00:08:05.202 8318.031 - 8368.443: 3.1101% ( 93) 00:08:05.202 
8368.443 - 8418.855: 4.0551% ( 127) 00:08:05.202 8418.855 - 8469.268: 5.6622% ( 216) 00:08:05.202 8469.268 - 8519.680: 7.7753% ( 284) 00:08:05.202 8519.680 - 8570.092: 9.7842% ( 270) 00:08:05.202 8570.092 - 8620.505: 12.2545% ( 332) 00:08:05.202 8620.505 - 8670.917: 15.2530% ( 403) 00:08:05.202 8670.917 - 8721.329: 18.2292% ( 400) 00:08:05.202 8721.329 - 8771.742: 21.4062% ( 427) 00:08:05.202 8771.742 - 8822.154: 24.7619% ( 451) 00:08:05.202 8822.154 - 8872.566: 28.0208% ( 438) 00:08:05.202 8872.566 - 8922.978: 31.5253% ( 471) 00:08:05.202 8922.978 - 8973.391: 35.1414% ( 486) 00:08:05.202 8973.391 - 9023.803: 38.7946% ( 491) 00:08:05.202 9023.803 - 9074.215: 42.5223% ( 501) 00:08:05.202 9074.215 - 9124.628: 46.3393% ( 513) 00:08:05.202 9124.628 - 9175.040: 50.0000% ( 492) 00:08:05.202 9175.040 - 9225.452: 53.7649% ( 506) 00:08:05.202 9225.452 - 9275.865: 57.5000% ( 502) 00:08:05.202 9275.865 - 9326.277: 61.1161% ( 486) 00:08:05.202 9326.277 - 9376.689: 64.4940% ( 454) 00:08:05.202 9376.689 - 9427.102: 67.3884% ( 389) 00:08:05.202 9427.102 - 9477.514: 69.9256% ( 341) 00:08:05.202 9477.514 - 9527.926: 72.2247% ( 309) 00:08:05.202 9527.926 - 9578.338: 74.2262% ( 269) 00:08:05.202 9578.338 - 9628.751: 76.1979% ( 265) 00:08:05.202 9628.751 - 9679.163: 77.8646% ( 224) 00:08:05.202 9679.163 - 9729.575: 79.4494% ( 213) 00:08:05.202 9729.575 - 9779.988: 80.8185% ( 184) 00:08:05.202 9779.988 - 9830.400: 81.9717% ( 155) 00:08:05.202 9830.400 - 9880.812: 83.0506% ( 145) 00:08:05.202 9880.812 - 9931.225: 83.9509% ( 121) 00:08:05.202 9931.225 - 9981.637: 84.8214% ( 117) 00:08:05.202 9981.637 - 10032.049: 85.5804% ( 102) 00:08:05.202 10032.049 - 10082.462: 86.2500% ( 90) 00:08:05.202 10082.462 - 10132.874: 86.7932% ( 73) 00:08:05.202 10132.874 - 10183.286: 87.3810% ( 79) 00:08:05.202 10183.286 - 10233.698: 87.9167% ( 72) 00:08:05.202 10233.698 - 10284.111: 88.3557% ( 59) 00:08:05.202 10284.111 - 10334.523: 88.7202% ( 49) 00:08:05.202 10334.523 - 10384.935: 89.0402% ( 43) 00:08:05.202 10384.935 - 10435.348: 89.3304% ( 39) 00:08:05.202 10435.348 - 10485.760: 89.6577% ( 44) 00:08:05.202 10485.760 - 10536.172: 89.9628% ( 41) 00:08:05.202 10536.172 - 10586.585: 90.2530% ( 39) 00:08:05.202 10586.585 - 10636.997: 90.5432% ( 39) 00:08:05.202 10636.997 - 10687.409: 90.7887% ( 33) 00:08:05.202 10687.409 - 10737.822: 91.0268% ( 32) 00:08:05.202 10737.822 - 10788.234: 91.2277% ( 27) 00:08:05.202 10788.234 - 10838.646: 91.4286% ( 27) 00:08:05.202 10838.646 - 10889.058: 91.6741% ( 33) 00:08:05.202 10889.058 - 10939.471: 91.8527% ( 24) 00:08:05.202 10939.471 - 10989.883: 92.0015% ( 20) 00:08:05.202 10989.883 - 11040.295: 92.1577% ( 21) 00:08:05.202 11040.295 - 11090.708: 92.3661% ( 28) 00:08:05.202 11090.708 - 11141.120: 92.5893% ( 30) 00:08:05.202 11141.120 - 11191.532: 92.7530% ( 22) 00:08:05.202 11191.532 - 11241.945: 92.9464% ( 26) 00:08:05.202 11241.945 - 11292.357: 93.1027% ( 21) 00:08:05.202 11292.357 - 11342.769: 93.3110% ( 28) 00:08:05.202 11342.769 - 11393.182: 93.4673% ( 21) 00:08:05.202 11393.182 - 11443.594: 93.6384% ( 23) 00:08:05.202 11443.594 - 11494.006: 93.8244% ( 25) 00:08:05.202 11494.006 - 11544.418: 94.0104% ( 25) 00:08:05.202 11544.418 - 11594.831: 94.2039% ( 26) 00:08:05.202 11594.831 - 11645.243: 94.4122% ( 28) 00:08:05.202 11645.243 - 11695.655: 94.5982% ( 25) 00:08:05.202 11695.655 - 11746.068: 94.7470% ( 20) 00:08:05.202 11746.068 - 11796.480: 94.8661% ( 16) 00:08:05.202 11796.480 - 11846.892: 94.9777% ( 15) 00:08:05.202 11846.892 - 11897.305: 95.0893% ( 15) 00:08:05.202 11897.305 - 
11947.717: 95.1935% ( 14) 00:08:05.202 11947.717 - 11998.129: 95.3125% ( 16) 00:08:05.202 11998.129 - 12048.542: 95.4167% ( 14) 00:08:05.202 12048.542 - 12098.954: 95.4985% ( 11) 00:08:05.202 12098.954 - 12149.366: 95.6027% ( 14) 00:08:05.202 12149.366 - 12199.778: 95.7143% ( 15) 00:08:05.202 12199.778 - 12250.191: 95.8408% ( 17) 00:08:05.202 12250.191 - 12300.603: 95.9449% ( 14) 00:08:05.202 12300.603 - 12351.015: 96.0342% ( 12) 00:08:05.202 12351.015 - 12401.428: 96.1310% ( 13) 00:08:05.202 12401.428 - 12451.840: 96.1756% ( 6) 00:08:05.202 12451.840 - 12502.252: 96.2500% ( 10) 00:08:05.202 12502.252 - 12552.665: 96.3095% ( 8) 00:08:05.202 12552.665 - 12603.077: 96.3765% ( 9) 00:08:05.202 12603.077 - 12653.489: 96.4509% ( 10) 00:08:05.202 12653.489 - 12703.902: 96.5104% ( 8) 00:08:05.202 12703.902 - 12754.314: 96.5625% ( 7) 00:08:05.202 12754.314 - 12804.726: 96.6146% ( 7) 00:08:05.202 12804.726 - 12855.138: 96.6741% ( 8) 00:08:05.202 12855.138 - 12905.551: 96.7411% ( 9) 00:08:05.202 12905.551 - 13006.375: 96.8304% ( 12) 00:08:05.202 13006.375 - 13107.200: 96.9271% ( 13) 00:08:05.202 13107.200 - 13208.025: 96.9866% ( 8) 00:08:05.203 13208.025 - 13308.849: 97.0387% ( 7) 00:08:05.203 13308.849 - 13409.674: 97.1503% ( 15) 00:08:05.203 13409.674 - 13510.498: 97.2396% ( 12) 00:08:05.203 13510.498 - 13611.323: 97.3438% ( 14) 00:08:05.203 13611.323 - 13712.148: 97.4479% ( 14) 00:08:05.203 13712.148 - 13812.972: 97.5446% ( 13) 00:08:05.203 13812.972 - 13913.797: 97.6339% ( 12) 00:08:05.203 13913.797 - 14014.622: 97.7381% ( 14) 00:08:05.203 14014.622 - 14115.446: 97.8199% ( 11) 00:08:05.203 14115.446 - 14216.271: 97.9241% ( 14) 00:08:05.203 14216.271 - 14317.095: 97.9688% ( 6) 00:08:05.203 14317.095 - 14417.920: 98.0208% ( 7) 00:08:05.203 14417.920 - 14518.745: 98.0878% ( 9) 00:08:05.203 14518.745 - 14619.569: 98.1548% ( 9) 00:08:05.203 14619.569 - 14720.394: 98.2292% ( 10) 00:08:05.203 14720.394 - 14821.218: 98.2738% ( 6) 00:08:05.203 14821.218 - 14922.043: 98.3259% ( 7) 00:08:05.203 14922.043 - 15022.868: 98.3631% ( 5) 00:08:05.203 15022.868 - 15123.692: 98.4077% ( 6) 00:08:05.203 15123.692 - 15224.517: 98.4598% ( 7) 00:08:05.203 15224.517 - 15325.342: 98.5045% ( 6) 00:08:05.203 15325.342 - 15426.166: 98.5491% ( 6) 00:08:05.203 15426.166 - 15526.991: 98.5714% ( 3) 00:08:05.203 16636.062 - 16736.886: 98.5938% ( 3) 00:08:05.203 16736.886 - 16837.711: 98.6161% ( 3) 00:08:05.203 16837.711 - 16938.535: 98.6458% ( 4) 00:08:05.203 16938.535 - 17039.360: 98.6756% ( 4) 00:08:05.203 17039.360 - 17140.185: 98.7054% ( 4) 00:08:05.203 17140.185 - 17241.009: 98.7277% ( 3) 00:08:05.203 17241.009 - 17341.834: 98.7574% ( 4) 00:08:05.203 17341.834 - 17442.658: 98.8095% ( 7) 00:08:05.203 17442.658 - 17543.483: 98.8616% ( 7) 00:08:05.203 17543.483 - 17644.308: 98.9062% ( 6) 00:08:05.203 17644.308 - 17745.132: 98.9583% ( 7) 00:08:05.203 17745.132 - 17845.957: 99.0104% ( 7) 00:08:05.203 17845.957 - 17946.782: 99.0476% ( 5) 00:08:05.203 18148.431 - 18249.255: 99.0699% ( 3) 00:08:05.203 18249.255 - 18350.080: 99.0923% ( 3) 00:08:05.203 18350.080 - 18450.905: 99.1146% ( 3) 00:08:05.203 18450.905 - 18551.729: 99.1443% ( 4) 00:08:05.203 18551.729 - 18652.554: 99.1667% ( 3) 00:08:05.203 18652.554 - 18753.378: 99.1964% ( 4) 00:08:05.203 18753.378 - 18854.203: 99.2188% ( 3) 00:08:05.203 18854.203 - 18955.028: 99.2411% ( 3) 00:08:05.203 18955.028 - 19055.852: 99.2634% ( 3) 00:08:05.203 19055.852 - 19156.677: 99.2932% ( 4) 00:08:05.203 19156.677 - 19257.502: 99.3155% ( 3) 00:08:05.203 19257.502 - 19358.326: 99.3378% ( 3) 
00:08:05.203 19358.326 - 19459.151: 99.3676% ( 4) 00:08:05.203 19459.151 - 19559.975: 99.3899% ( 3) 00:08:05.203 19559.975 - 19660.800: 99.4122% ( 3) 00:08:05.203 19660.800 - 19761.625: 99.4345% ( 3) 00:08:05.203 19761.625 - 19862.449: 99.4643% ( 4) 00:08:05.203 19862.449 - 19963.274: 99.4866% ( 3) 00:08:05.203 19963.274 - 20064.098: 99.5089% ( 3) 00:08:05.203 20064.098 - 20164.923: 99.5238% ( 2) 00:08:05.203 24197.908 - 24298.732: 99.5536% ( 4) 00:08:05.203 24298.732 - 24399.557: 99.5982% ( 6) 00:08:05.203 24399.557 - 24500.382: 99.6503% ( 7) 00:08:05.203 24500.382 - 24601.206: 99.6949% ( 6) 00:08:05.203 24601.206 - 24702.031: 99.7396% ( 6) 00:08:05.203 24702.031 - 24802.855: 99.7842% ( 6) 00:08:05.203 24802.855 - 24903.680: 99.8289% ( 6) 00:08:05.203 24903.680 - 25004.505: 99.8735% ( 6) 00:08:05.203 25004.505 - 25105.329: 99.9182% ( 6) 00:08:05.203 25105.329 - 25206.154: 99.9702% ( 7) 00:08:05.203 25206.154 - 25306.978: 100.0000% ( 4) 00:08:05.203 00:08:05.203 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:05.203 ============================================================================== 00:08:05.203 Range in us Cumulative IO count 00:08:05.203 3881.748 - 3906.954: 0.0595% ( 8) 00:08:05.203 3906.954 - 3932.160: 0.0670% ( 1) 00:08:05.203 3932.160 - 3957.366: 0.0818% ( 2) 00:08:05.203 3957.366 - 3982.572: 0.0967% ( 2) 00:08:05.203 3982.572 - 4007.778: 0.1042% ( 1) 00:08:05.203 4007.778 - 4032.985: 0.1190% ( 2) 00:08:05.203 4032.985 - 4058.191: 0.1339% ( 2) 00:08:05.203 4058.191 - 4083.397: 0.1488% ( 2) 00:08:05.203 4083.397 - 4108.603: 0.1637% ( 2) 00:08:05.203 4108.603 - 4133.809: 0.1860% ( 3) 00:08:05.203 4133.809 - 4159.015: 0.2009% ( 2) 00:08:05.203 4159.015 - 4184.222: 0.2158% ( 2) 00:08:05.203 4184.222 - 4209.428: 0.2307% ( 2) 00:08:05.203 4209.428 - 4234.634: 0.2455% ( 2) 00:08:05.203 4234.634 - 4259.840: 0.2679% ( 3) 00:08:05.203 4259.840 - 4285.046: 0.2827% ( 2) 00:08:05.203 4285.046 - 4310.252: 0.2976% ( 2) 00:08:05.203 4310.252 - 4335.458: 0.3125% ( 2) 00:08:05.203 4335.458 - 4360.665: 0.3274% ( 2) 00:08:05.203 4360.665 - 4385.871: 0.3423% ( 2) 00:08:05.203 4385.871 - 4411.077: 0.3571% ( 2) 00:08:05.203 4411.077 - 4436.283: 0.3795% ( 3) 00:08:05.203 4436.283 - 4461.489: 0.3943% ( 2) 00:08:05.203 4461.489 - 4486.695: 0.4092% ( 2) 00:08:05.203 4486.695 - 4511.902: 0.4241% ( 2) 00:08:05.203 4511.902 - 4537.108: 0.4390% ( 2) 00:08:05.203 4537.108 - 4562.314: 0.4539% ( 2) 00:08:05.203 4562.314 - 4587.520: 0.4688% ( 2) 00:08:05.203 4587.520 - 4612.726: 0.4762% ( 1) 00:08:05.203 7461.022 - 7511.434: 0.4836% ( 1) 00:08:05.203 7511.434 - 7561.846: 0.5283% ( 6) 00:08:05.203 7561.846 - 7612.258: 0.5655% ( 5) 00:08:05.203 7612.258 - 7662.671: 0.5952% ( 4) 00:08:05.203 7662.671 - 7713.083: 0.6250% ( 4) 00:08:05.203 7713.083 - 7763.495: 0.6548% ( 4) 00:08:05.203 7763.495 - 7813.908: 0.7515% ( 13) 00:08:05.203 7813.908 - 7864.320: 0.8631% ( 15) 00:08:05.203 7864.320 - 7914.732: 0.9449% ( 11) 00:08:05.203 7914.732 - 7965.145: 1.0119% ( 9) 00:08:05.203 7965.145 - 8015.557: 1.0938% ( 11) 00:08:05.203 8015.557 - 8065.969: 1.2128% ( 16) 00:08:05.203 8065.969 - 8116.382: 1.3393% ( 17) 00:08:05.203 8116.382 - 8166.794: 1.5179% ( 24) 00:08:05.203 8166.794 - 8217.206: 1.7411% ( 30) 00:08:05.203 8217.206 - 8267.618: 2.0089% ( 36) 00:08:05.203 8267.618 - 8318.031: 2.4330% ( 57) 00:08:05.203 8318.031 - 8368.443: 3.1399% ( 95) 00:08:05.203 8368.443 - 8418.855: 3.9881% ( 114) 00:08:05.203 8418.855 - 8469.268: 5.2679% ( 172) 00:08:05.203 8469.268 - 8519.680: 7.0759% ( 243) 
00:08:05.203 8519.680 - 8570.092: 9.1146% ( 274) 00:08:05.203 8570.092 - 8620.505: 11.5030% ( 321) 00:08:05.203 8620.505 - 8670.917: 14.2634% ( 371) 00:08:05.203 8670.917 - 8721.329: 17.2619% ( 403) 00:08:05.203 8721.329 - 8771.742: 20.6250% ( 452) 00:08:05.203 8771.742 - 8822.154: 24.0253% ( 457) 00:08:05.203 8822.154 - 8872.566: 27.2917% ( 439) 00:08:05.203 8872.566 - 8922.978: 30.9226% ( 488) 00:08:05.203 8922.978 - 8973.391: 34.6577% ( 502) 00:08:05.203 8973.391 - 9023.803: 38.4152% ( 505) 00:08:05.203 9023.803 - 9074.215: 42.2098% ( 510) 00:08:05.203 9074.215 - 9124.628: 45.9821% ( 507) 00:08:05.203 9124.628 - 9175.040: 49.8884% ( 525) 00:08:05.203 9175.040 - 9225.452: 53.7798% ( 523) 00:08:05.204 9225.452 - 9275.865: 57.6711% ( 523) 00:08:05.204 9275.865 - 9326.277: 61.4658% ( 510) 00:08:05.204 9326.277 - 9376.689: 64.8958% ( 461) 00:08:05.204 9376.689 - 9427.102: 68.0580% ( 425) 00:08:05.204 9427.102 - 9477.514: 70.8185% ( 371) 00:08:05.204 9477.514 - 9527.926: 73.2812% ( 331) 00:08:05.204 9527.926 - 9578.338: 75.2083% ( 259) 00:08:05.204 9578.338 - 9628.751: 77.1057% ( 255) 00:08:05.204 9628.751 - 9679.163: 78.7723% ( 224) 00:08:05.204 9679.163 - 9729.575: 80.2530% ( 199) 00:08:05.204 9729.575 - 9779.988: 81.5997% ( 181) 00:08:05.204 9779.988 - 9830.400: 82.7679% ( 157) 00:08:05.204 9830.400 - 9880.812: 83.6756% ( 122) 00:08:05.204 9880.812 - 9931.225: 84.5015% ( 111) 00:08:05.204 9931.225 - 9981.637: 85.2604% ( 102) 00:08:05.204 9981.637 - 10032.049: 86.0268% ( 103) 00:08:05.204 10032.049 - 10082.462: 86.6369% ( 82) 00:08:05.204 10082.462 - 10132.874: 87.2098% ( 77) 00:08:05.204 10132.874 - 10183.286: 87.7232% ( 69) 00:08:05.204 10183.286 - 10233.698: 88.1473% ( 57) 00:08:05.204 10233.698 - 10284.111: 88.4747% ( 44) 00:08:05.204 10284.111 - 10334.523: 88.7872% ( 42) 00:08:05.204 10334.523 - 10384.935: 89.0253% ( 32) 00:08:05.204 10384.935 - 10435.348: 89.2708% ( 33) 00:08:05.204 10435.348 - 10485.760: 89.5089% ( 32) 00:08:05.204 10485.760 - 10536.172: 89.7321% ( 30) 00:08:05.204 10536.172 - 10586.585: 89.9107% ( 24) 00:08:05.204 10586.585 - 10636.997: 90.0670% ( 21) 00:08:05.204 10636.997 - 10687.409: 90.2009% ( 18) 00:08:05.204 10687.409 - 10737.822: 90.4241% ( 30) 00:08:05.204 10737.822 - 10788.234: 90.6399% ( 29) 00:08:05.204 10788.234 - 10838.646: 90.8557% ( 29) 00:08:05.204 10838.646 - 10889.058: 91.1086% ( 34) 00:08:05.204 10889.058 - 10939.471: 91.3542% ( 33) 00:08:05.204 10939.471 - 10989.883: 91.5997% ( 33) 00:08:05.204 10989.883 - 11040.295: 91.8378% ( 32) 00:08:05.204 11040.295 - 11090.708: 92.0387% ( 27) 00:08:05.204 11090.708 - 11141.120: 92.2545% ( 29) 00:08:05.204 11141.120 - 11191.532: 92.4926% ( 32) 00:08:05.204 11191.532 - 11241.945: 92.6935% ( 27) 00:08:05.204 11241.945 - 11292.357: 92.8795% ( 25) 00:08:05.204 11292.357 - 11342.769: 93.0580% ( 24) 00:08:05.204 11342.769 - 11393.182: 93.2440% ( 25) 00:08:05.204 11393.182 - 11443.594: 93.4673% ( 30) 00:08:05.204 11443.594 - 11494.006: 93.7798% ( 42) 00:08:05.204 11494.006 - 11544.418: 94.0476% ( 36) 00:08:05.204 11544.418 - 11594.831: 94.2857% ( 32) 00:08:05.204 11594.831 - 11645.243: 94.4940% ( 28) 00:08:05.204 11645.243 - 11695.655: 94.6875% ( 26) 00:08:05.204 11695.655 - 11746.068: 94.8140% ( 17) 00:08:05.204 11746.068 - 11796.480: 95.0074% ( 26) 00:08:05.204 11796.480 - 11846.892: 95.1265% ( 16) 00:08:05.204 11846.892 - 11897.305: 95.2455% ( 16) 00:08:05.204 11897.305 - 11947.717: 95.3720% ( 17) 00:08:05.204 11947.717 - 11998.129: 95.5283% ( 21) 00:08:05.204 11998.129 - 12048.542: 95.6771% ( 20) 00:08:05.204 
12048.542 - 12098.954: 95.8185% ( 19) 00:08:05.204 12098.954 - 12149.366: 95.9449% ( 17) 00:08:05.204 12149.366 - 12199.778: 96.0491% ( 14) 00:08:05.204 12199.778 - 12250.191: 96.1682% ( 16) 00:08:05.204 12250.191 - 12300.603: 96.2649% ( 13) 00:08:05.204 12300.603 - 12351.015: 96.3690% ( 14) 00:08:05.204 12351.015 - 12401.428: 96.4435% ( 10) 00:08:05.204 12401.428 - 12451.840: 96.5030% ( 8) 00:08:05.204 12451.840 - 12502.252: 96.5774% ( 10) 00:08:05.204 12502.252 - 12552.665: 96.6369% ( 8) 00:08:05.204 12552.665 - 12603.077: 96.6964% ( 8) 00:08:05.204 12603.077 - 12653.489: 96.7560% ( 8) 00:08:05.204 12653.489 - 12703.902: 96.8229% ( 9) 00:08:05.204 12703.902 - 12754.314: 96.8899% ( 9) 00:08:05.204 12754.314 - 12804.726: 96.9494% ( 8) 00:08:05.204 12804.726 - 12855.138: 96.9717% ( 3) 00:08:05.204 12855.138 - 12905.551: 97.0089% ( 5) 00:08:05.204 12905.551 - 13006.375: 97.0685% ( 8) 00:08:05.204 13006.375 - 13107.200: 97.1205% ( 7) 00:08:05.204 13107.200 - 13208.025: 97.1801% ( 8) 00:08:05.204 13208.025 - 13308.849: 97.2545% ( 10) 00:08:05.204 13308.849 - 13409.674: 97.3289% ( 10) 00:08:05.204 13409.674 - 13510.498: 97.4033% ( 10) 00:08:05.204 13510.498 - 13611.323: 97.4926% ( 12) 00:08:05.204 13611.323 - 13712.148: 97.5670% ( 10) 00:08:05.204 13712.148 - 13812.972: 97.6488% ( 11) 00:08:05.204 13812.972 - 13913.797: 97.7083% ( 8) 00:08:05.204 13913.797 - 14014.622: 97.7604% ( 7) 00:08:05.204 14014.622 - 14115.446: 97.8199% ( 8) 00:08:05.204 14115.446 - 14216.271: 97.8646% ( 6) 00:08:05.204 14216.271 - 14317.095: 97.8869% ( 3) 00:08:05.204 14317.095 - 14417.920: 97.9167% ( 4) 00:08:05.204 14417.920 - 14518.745: 97.9836% ( 9) 00:08:05.204 14518.745 - 14619.569: 98.0804% ( 13) 00:08:05.204 14619.569 - 14720.394: 98.1622% ( 11) 00:08:05.204 14720.394 - 14821.218: 98.2217% ( 8) 00:08:05.204 14821.218 - 14922.043: 98.3036% ( 11) 00:08:05.204 14922.043 - 15022.868: 98.4077% ( 14) 00:08:05.204 15022.868 - 15123.692: 98.4673% ( 8) 00:08:05.204 15123.692 - 15224.517: 98.5119% ( 6) 00:08:05.204 15224.517 - 15325.342: 98.5565% ( 6) 00:08:05.204 15325.342 - 15426.166: 98.5714% ( 2) 00:08:05.204 16333.588 - 16434.412: 98.6012% ( 4) 00:08:05.204 16434.412 - 16535.237: 98.6235% ( 3) 00:08:05.204 16535.237 - 16636.062: 98.6458% ( 3) 00:08:05.204 16636.062 - 16736.886: 98.6756% ( 4) 00:08:05.204 16736.886 - 16837.711: 98.6979% ( 3) 00:08:05.204 16837.711 - 16938.535: 98.7202% ( 3) 00:08:05.204 16938.535 - 17039.360: 98.7500% ( 4) 00:08:05.204 17039.360 - 17140.185: 98.7723% ( 3) 00:08:05.204 17140.185 - 17241.009: 98.8021% ( 4) 00:08:05.204 17241.009 - 17341.834: 98.8318% ( 4) 00:08:05.204 17341.834 - 17442.658: 98.8542% ( 3) 00:08:05.204 17442.658 - 17543.483: 98.8914% ( 5) 00:08:05.204 17543.483 - 17644.308: 98.9211% ( 4) 00:08:05.204 17644.308 - 17745.132: 98.9509% ( 4) 00:08:05.204 17745.132 - 17845.957: 98.9807% ( 4) 00:08:05.204 17845.957 - 17946.782: 99.0179% ( 5) 00:08:05.204 17946.782 - 18047.606: 99.0476% ( 4) 00:08:05.204 18450.905 - 18551.729: 99.0625% ( 2) 00:08:05.204 18551.729 - 18652.554: 99.0923% ( 4) 00:08:05.204 18652.554 - 18753.378: 99.1146% ( 3) 00:08:05.204 18753.378 - 18854.203: 99.1369% ( 3) 00:08:05.204 18854.203 - 18955.028: 99.1667% ( 4) 00:08:05.204 18955.028 - 19055.852: 99.1964% ( 4) 00:08:05.204 19055.852 - 19156.677: 99.2188% ( 3) 00:08:05.204 19156.677 - 19257.502: 99.2336% ( 2) 00:08:05.204 19257.502 - 19358.326: 99.2560% ( 3) 00:08:05.204 19358.326 - 19459.151: 99.2783% ( 3) 00:08:05.204 19459.151 - 19559.975: 99.3080% ( 4) 00:08:05.204 19559.975 - 19660.800: 99.3304% ( 3) 
00:08:05.204 19660.800 - 19761.625: 99.3527% ( 3) 00:08:05.204 19761.625 - 19862.449: 99.3750% ( 3) 00:08:05.204 19862.449 - 19963.274: 99.4048% ( 4) 00:08:05.204 19963.274 - 20064.098: 99.4345% ( 4) 00:08:05.204 20064.098 - 20164.923: 99.4792% ( 6) 00:08:05.204 20164.923 - 20265.748: 99.5164% ( 5) 00:08:05.204 20265.748 - 20366.572: 99.5238% ( 1) 00:08:05.204 23794.609 - 23895.434: 99.5536% ( 4) 00:08:05.204 23895.434 - 23996.258: 99.6057% ( 7) 00:08:05.204 23996.258 - 24097.083: 99.6503% ( 6) 00:08:05.204 24097.083 - 24197.908: 99.6949% ( 6) 00:08:05.204 24197.908 - 24298.732: 99.7396% ( 6) 00:08:05.204 24298.732 - 24399.557: 99.7842% ( 6) 00:08:05.204 24399.557 - 24500.382: 99.8289% ( 6) 00:08:05.204 24500.382 - 24601.206: 99.8735% ( 6) 00:08:05.204 24601.206 - 24702.031: 99.9182% ( 6) 00:08:05.205 24702.031 - 24802.855: 99.9628% ( 6) 00:08:05.205 24802.855 - 24903.680: 100.0000% ( 5) 00:08:05.205 00:08:05.205 23:41:53 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:06.602 Initializing NVMe Controllers 00:08:06.602 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:06.602 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:06.602 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:06.602 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:06.602 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:06.602 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:06.602 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:06.602 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:06.602 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:06.602 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:06.602 Initialization complete. Launching workers. 
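The spdk_nvme_perf invocation recorded just above is what produces the summary and per-bucket latency tables that follow. A minimal sketch of the same run is shown below with each option annotated; the binary path is simply the one echoed in this workspace's log, and the option meanings reflect the usual spdk_nvme_perf flags as I read them, so treat both as assumptions rather than something the log itself confirms. The output path passed to tee is hypothetical.

    #!/usr/bin/env bash
    # Sketch: repeat the write workload from this test stage and keep the output for inspection.
    # Assumes SPDK was built in the same location used on the autotest VM.
    #   -q 128    queue depth per namespace
    #   -w write  sequential write workload
    #   -o 12288  I/O size in bytes (12 KiB)
    #   -t 1      run time in seconds
    #   -LL       software latency tracking; the doubled flag requests the detailed histograms (assumed semantics)
    #   -i 0      shared-memory group id, so the tool can coexist with other SPDK processes
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -q 128 -w write -o 12288 -t 1 -LL -i 0 | tee /tmp/nvme_perf_write.log  # hypothetical log path

With latency tracking enabled, the tool prints, per attached namespace, an IOPS/MiB/s/average/min/max summary, then the latency percentiles, then the cumulative histogram buckets in microseconds, which matches the shape of the tables that follow.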
00:08:06.602 ======================================================== 00:08:06.602 Latency(us) 00:08:06.602 Device Information : IOPS MiB/s Average min max 00:08:06.602 PCIE (0000:00:13.0) NSID 1 from core 0: 14903.27 174.65 8593.69 6088.71 26240.35 00:08:06.602 PCIE (0000:00:11.0) NSID 1 from core 0: 14903.27 174.65 8586.45 5678.08 25590.52 00:08:06.602 PCIE (0000:00:10.0) NSID 1 from core 0: 14903.27 174.65 8577.69 5318.85 25314.07 00:08:06.602 PCIE (0000:00:12.0) NSID 1 from core 0: 14903.27 174.65 8568.36 5118.64 24495.45 00:08:06.602 PCIE (0000:00:12.0) NSID 2 from core 0: 14903.27 174.65 8559.68 4342.03 24186.99 00:08:06.602 PCIE (0000:00:12.0) NSID 3 from core 0: 14903.27 174.65 8551.26 4158.38 23658.32 00:08:06.602 ======================================================== 00:08:06.602 Total : 89419.60 1047.89 8572.85 4158.38 26240.35 00:08:06.602 00:08:06.602 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:06.602 ================================================================================= 00:08:06.602 1.00000% : 7360.197us 00:08:06.602 10.00000% : 7713.083us 00:08:06.602 25.00000% : 7914.732us 00:08:06.602 50.00000% : 8217.206us 00:08:06.602 75.00000% : 8620.505us 00:08:06.602 90.00000% : 9679.163us 00:08:06.602 95.00000% : 11292.357us 00:08:06.602 98.00000% : 12804.726us 00:08:06.602 99.00000% : 13611.323us 00:08:06.602 99.50000% : 20164.923us 00:08:06.602 99.90000% : 26012.751us 00:08:06.602 99.99000% : 26214.400us 00:08:06.602 99.99900% : 26416.049us 00:08:06.602 99.99990% : 26416.049us 00:08:06.602 99.99999% : 26416.049us 00:08:06.602 00:08:06.602 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:06.602 ================================================================================= 00:08:06.602 1.00000% : 7410.609us 00:08:06.602 10.00000% : 7763.495us 00:08:06.602 25.00000% : 7914.732us 00:08:06.602 50.00000% : 8166.794us 00:08:06.602 75.00000% : 8620.505us 00:08:06.602 90.00000% : 9679.163us 00:08:06.602 95.00000% : 11241.945us 00:08:06.602 98.00000% : 12855.138us 00:08:06.602 99.00000% : 13712.148us 00:08:06.602 99.50000% : 20669.046us 00:08:06.602 99.90000% : 25407.803us 00:08:06.602 99.99000% : 25609.452us 00:08:06.602 99.99900% : 25609.452us 00:08:06.602 99.99990% : 25609.452us 00:08:06.602 99.99999% : 25609.452us 00:08:06.602 00:08:06.602 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:06.602 ================================================================================= 00:08:06.602 1.00000% : 7259.372us 00:08:06.602 10.00000% : 7612.258us 00:08:06.602 25.00000% : 7864.320us 00:08:06.602 50.00000% : 8217.206us 00:08:06.602 75.00000% : 8721.329us 00:08:06.602 90.00000% : 9729.575us 00:08:06.602 95.00000% : 11191.532us 00:08:06.602 98.00000% : 12804.726us 00:08:06.602 99.00000% : 14014.622us 00:08:06.602 99.50000% : 20064.098us 00:08:06.602 99.90000% : 25004.505us 00:08:06.602 99.99000% : 25306.978us 00:08:06.602 99.99900% : 25407.803us 00:08:06.602 99.99990% : 25407.803us 00:08:06.602 99.99999% : 25407.803us 00:08:06.602 00:08:06.602 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:06.602 ================================================================================= 00:08:06.602 1.00000% : 7410.609us 00:08:06.602 10.00000% : 7713.083us 00:08:06.602 25.00000% : 7914.732us 00:08:06.602 50.00000% : 8217.206us 00:08:06.602 75.00000% : 8620.505us 00:08:06.602 90.00000% : 9679.163us 00:08:06.602 95.00000% : 11090.708us 00:08:06.603 98.00000% : 12855.138us 00:08:06.603 99.00000% : 
14216.271us 00:08:06.603 99.50000% : 19459.151us 00:08:06.603 99.90000% : 24197.908us 00:08:06.603 99.99000% : 24500.382us 00:08:06.603 99.99900% : 24500.382us 00:08:06.603 99.99990% : 24500.382us 00:08:06.603 99.99999% : 24500.382us 00:08:06.603 00:08:06.603 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:06.603 ================================================================================= 00:08:06.603 1.00000% : 7309.785us 00:08:06.603 10.00000% : 7713.083us 00:08:06.603 25.00000% : 7914.732us 00:08:06.603 50.00000% : 8217.206us 00:08:06.603 75.00000% : 8670.917us 00:08:06.603 90.00000% : 9679.163us 00:08:06.603 95.00000% : 11090.708us 00:08:06.603 98.00000% : 12804.726us 00:08:06.603 99.00000% : 13611.323us 00:08:06.603 99.50000% : 19459.151us 00:08:06.603 99.90000% : 23996.258us 00:08:06.603 99.99000% : 24197.908us 00:08:06.603 99.99900% : 24197.908us 00:08:06.603 99.99990% : 24197.908us 00:08:06.603 99.99999% : 24197.908us 00:08:06.603 00:08:06.603 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:06.603 ================================================================================= 00:08:06.603 1.00000% : 7259.372us 00:08:06.603 10.00000% : 7713.083us 00:08:06.603 25.00000% : 7914.732us 00:08:06.603 50.00000% : 8217.206us 00:08:06.603 75.00000% : 8620.505us 00:08:06.603 90.00000% : 9628.751us 00:08:06.603 95.00000% : 11141.120us 00:08:06.603 98.00000% : 12855.138us 00:08:06.603 99.00000% : 13409.674us 00:08:06.603 99.50000% : 18652.554us 00:08:06.603 99.90000% : 23391.311us 00:08:06.603 99.99000% : 23693.785us 00:08:06.603 99.99900% : 23693.785us 00:08:06.603 99.99990% : 23693.785us 00:08:06.603 99.99999% : 23693.785us 00:08:06.603 00:08:06.603 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:06.603 ============================================================================== 00:08:06.603 Range in us Cumulative IO count 00:08:06.603 6074.683 - 6099.889: 0.0134% ( 2) 00:08:06.603 6099.889 - 6125.095: 0.0268% ( 2) 00:08:06.603 6125.095 - 6150.302: 0.0536% ( 4) 00:08:06.603 6150.302 - 6175.508: 0.0805% ( 4) 00:08:06.603 6175.508 - 6200.714: 0.1207% ( 6) 00:08:06.603 6200.714 - 6225.920: 0.1542% ( 5) 00:08:06.603 6225.920 - 6251.126: 0.1878% ( 5) 00:08:06.603 6251.126 - 6276.332: 0.2280% ( 6) 00:08:06.603 6276.332 - 6301.538: 0.2481% ( 3) 00:08:06.603 6301.538 - 6326.745: 0.2615% ( 2) 00:08:06.603 6326.745 - 6351.951: 0.2749% ( 2) 00:08:06.603 6351.951 - 6377.157: 0.2884% ( 2) 00:08:06.603 6377.157 - 6402.363: 0.3018% ( 2) 00:08:06.603 6402.363 - 6427.569: 0.3152% ( 2) 00:08:06.603 6427.569 - 6452.775: 0.3219% ( 1) 00:08:06.603 6452.775 - 6503.188: 0.3487% ( 4) 00:08:06.603 6503.188 - 6553.600: 0.3755% ( 4) 00:08:06.603 6553.600 - 6604.012: 0.4024% ( 4) 00:08:06.603 6604.012 - 6654.425: 0.4292% ( 4) 00:08:06.603 7108.135 - 7158.548: 0.4493% ( 3) 00:08:06.603 7158.548 - 7208.960: 0.5164% ( 10) 00:08:06.603 7208.960 - 7259.372: 0.6371% ( 18) 00:08:06.603 7259.372 - 7309.785: 0.8047% ( 25) 00:08:06.603 7309.785 - 7360.197: 1.0931% ( 43) 00:08:06.603 7360.197 - 7410.609: 1.5357% ( 66) 00:08:06.603 7410.609 - 7461.022: 2.2867% ( 112) 00:08:06.603 7461.022 - 7511.434: 3.3798% ( 163) 00:08:06.603 7511.434 - 7561.846: 4.7076% ( 198) 00:08:06.603 7561.846 - 7612.258: 6.4780% ( 264) 00:08:06.603 7612.258 - 7662.671: 8.5569% ( 310) 00:08:06.603 7662.671 - 7713.083: 11.2862% ( 407) 00:08:06.603 7713.083 - 7763.495: 14.1296% ( 424) 00:08:06.603 7763.495 - 7813.908: 17.5429% ( 509) 00:08:06.603 7813.908 - 7864.320: 21.4726% ( 586) 
00:08:06.603 7864.320 - 7914.732: 25.7444% ( 637) 00:08:06.603 7914.732 - 7965.145: 30.3045% ( 680) 00:08:06.603 7965.145 - 8015.557: 34.8243% ( 674) 00:08:06.603 8015.557 - 8065.969: 39.2369% ( 658) 00:08:06.603 8065.969 - 8116.382: 43.9244% ( 699) 00:08:06.603 8116.382 - 8166.794: 48.3704% ( 663) 00:08:06.603 8166.794 - 8217.206: 52.4879% ( 614) 00:08:06.603 8217.206 - 8267.618: 56.4042% ( 584) 00:08:06.603 8267.618 - 8318.031: 60.2200% ( 569) 00:08:06.603 8318.031 - 8368.443: 63.6937% ( 518) 00:08:06.603 8368.443 - 8418.855: 66.7784% ( 460) 00:08:06.603 8418.855 - 8469.268: 69.4608% ( 400) 00:08:06.603 8469.268 - 8519.680: 71.6805% ( 331) 00:08:06.603 8519.680 - 8570.092: 73.5917% ( 285) 00:08:06.603 8570.092 - 8620.505: 75.3152% ( 257) 00:08:06.603 8620.505 - 8670.917: 76.6564% ( 200) 00:08:06.603 8670.917 - 8721.329: 77.8098% ( 172) 00:08:06.603 8721.329 - 8771.742: 78.9163% ( 165) 00:08:06.603 8771.742 - 8822.154: 79.9490% ( 154) 00:08:06.603 8822.154 - 8872.566: 80.9013% ( 142) 00:08:06.603 8872.566 - 8922.978: 81.7194% ( 122) 00:08:06.603 8922.978 - 8973.391: 82.7052% ( 147) 00:08:06.603 8973.391 - 9023.803: 83.4630% ( 113) 00:08:06.603 9023.803 - 9074.215: 84.2945% ( 124) 00:08:06.603 9074.215 - 9124.628: 85.0657% ( 115) 00:08:06.603 9124.628 - 9175.040: 85.8101% ( 111) 00:08:06.603 9175.040 - 9225.452: 86.3734% ( 84) 00:08:06.603 9225.452 - 9275.865: 86.8898% ( 77) 00:08:06.603 9275.865 - 9326.277: 87.3189% ( 64) 00:08:06.603 9326.277 - 9376.689: 87.7817% ( 69) 00:08:06.603 9376.689 - 9427.102: 88.2310% ( 67) 00:08:06.603 9427.102 - 9477.514: 88.6467% ( 62) 00:08:06.603 9477.514 - 9527.926: 88.9887% ( 51) 00:08:06.603 9527.926 - 9578.338: 89.2905% ( 45) 00:08:06.603 9578.338 - 9628.751: 89.6593% ( 55) 00:08:06.603 9628.751 - 9679.163: 90.0148% ( 53) 00:08:06.603 9679.163 - 9729.575: 90.4439% ( 64) 00:08:06.603 9729.575 - 9779.988: 90.7591% ( 47) 00:08:06.603 9779.988 - 9830.400: 91.1749% ( 62) 00:08:06.603 9830.400 - 9880.812: 91.4767% ( 45) 00:08:06.603 9880.812 - 9931.225: 91.6980% ( 33) 00:08:06.603 9931.225 - 9981.637: 91.9058% ( 31) 00:08:06.603 9981.637 - 10032.049: 92.0534% ( 22) 00:08:06.603 10032.049 - 10082.462: 92.2009% ( 22) 00:08:06.603 10082.462 - 10132.874: 92.3350% ( 20) 00:08:06.604 10132.874 - 10183.286: 92.5496% ( 32) 00:08:06.604 10183.286 - 10233.698: 92.6301% ( 12) 00:08:06.604 10233.698 - 10284.111: 92.7240% ( 14) 00:08:06.604 10284.111 - 10334.523: 92.8112% ( 13) 00:08:06.604 10334.523 - 10384.935: 92.9386% ( 19) 00:08:06.604 10384.935 - 10435.348: 93.0861% ( 22) 00:08:06.604 10435.348 - 10485.760: 93.2068% ( 18) 00:08:06.604 10485.760 - 10536.172: 93.3007% ( 14) 00:08:06.604 10536.172 - 10586.585: 93.4683% ( 25) 00:08:06.604 10586.585 - 10636.997: 93.5756% ( 16) 00:08:06.604 10636.997 - 10687.409: 93.6293% ( 8) 00:08:06.604 10687.409 - 10737.822: 93.6964% ( 10) 00:08:06.604 10737.822 - 10788.234: 93.7701% ( 11) 00:08:06.604 10788.234 - 10838.646: 93.8506% ( 12) 00:08:06.604 10838.646 - 10889.058: 93.9378% ( 13) 00:08:06.604 10889.058 - 10939.471: 94.0786% ( 21) 00:08:06.604 10939.471 - 10989.883: 94.1993% ( 18) 00:08:06.604 10989.883 - 11040.295: 94.3066% ( 16) 00:08:06.604 11040.295 - 11090.708: 94.4407% ( 20) 00:08:06.604 11090.708 - 11141.120: 94.5950% ( 23) 00:08:06.604 11141.120 - 11191.532: 94.7827% ( 28) 00:08:06.604 11191.532 - 11241.945: 94.9303% ( 22) 00:08:06.604 11241.945 - 11292.357: 95.1650% ( 35) 00:08:06.604 11292.357 - 11342.769: 95.3527% ( 28) 00:08:06.604 11342.769 - 11393.182: 95.5137% ( 24) 00:08:06.604 11393.182 - 11443.594: 95.6344% 
( 18) 00:08:06.604 11443.594 - 11494.006: 95.7618% ( 19) 00:08:06.604 11494.006 - 11544.418: 95.8624% ( 15) 00:08:06.604 11544.418 - 11594.831: 96.0837% ( 33) 00:08:06.604 11594.831 - 11645.243: 96.1977% ( 17) 00:08:06.604 11645.243 - 11695.655: 96.3117% ( 17) 00:08:06.604 11695.655 - 11746.068: 96.4056% ( 14) 00:08:06.604 11746.068 - 11796.480: 96.5062% ( 15) 00:08:06.604 11796.480 - 11846.892: 96.6068% ( 15) 00:08:06.604 11846.892 - 11897.305: 96.7006% ( 14) 00:08:06.604 11897.305 - 11947.717: 96.7744% ( 11) 00:08:06.604 11947.717 - 11998.129: 96.8348% ( 9) 00:08:06.604 11998.129 - 12048.542: 96.9018% ( 10) 00:08:06.604 12048.542 - 12098.954: 96.9622% ( 9) 00:08:06.604 12098.954 - 12149.366: 97.0494% ( 13) 00:08:06.604 12149.366 - 12199.778: 97.1030% ( 8) 00:08:06.604 12199.778 - 12250.191: 97.1634% ( 9) 00:08:06.604 12250.191 - 12300.603: 97.2237% ( 9) 00:08:06.604 12300.603 - 12351.015: 97.2841% ( 9) 00:08:06.604 12351.015 - 12401.428: 97.3645% ( 12) 00:08:06.604 12401.428 - 12451.840: 97.4450% ( 12) 00:08:06.604 12451.840 - 12502.252: 97.5389% ( 14) 00:08:06.604 12502.252 - 12552.665: 97.6328% ( 14) 00:08:06.604 12552.665 - 12603.077: 97.7267% ( 14) 00:08:06.604 12603.077 - 12653.489: 97.8004% ( 11) 00:08:06.604 12653.489 - 12703.902: 97.8809% ( 12) 00:08:06.604 12703.902 - 12754.314: 97.9681% ( 13) 00:08:06.604 12754.314 - 12804.726: 98.0754% ( 16) 00:08:06.604 12804.726 - 12855.138: 98.1290% ( 8) 00:08:06.604 12855.138 - 12905.551: 98.2028% ( 11) 00:08:06.604 12905.551 - 13006.375: 98.3168% ( 17) 00:08:06.604 13006.375 - 13107.200: 98.4308% ( 17) 00:08:06.604 13107.200 - 13208.025: 98.5381% ( 16) 00:08:06.604 13208.025 - 13308.849: 98.6387% ( 15) 00:08:06.604 13308.849 - 13409.674: 98.7527% ( 17) 00:08:06.604 13409.674 - 13510.498: 98.9807% ( 34) 00:08:06.604 13510.498 - 13611.323: 99.0813% ( 15) 00:08:06.604 13611.323 - 13712.148: 99.1215% ( 6) 00:08:06.604 13712.148 - 13812.972: 99.1416% ( 3) 00:08:06.604 19257.502 - 19358.326: 99.1550% ( 2) 00:08:06.604 19358.326 - 19459.151: 99.2154% ( 9) 00:08:06.604 19459.151 - 19559.975: 99.2690% ( 8) 00:08:06.604 19559.975 - 19660.800: 99.3361% ( 10) 00:08:06.604 19660.800 - 19761.625: 99.3763% ( 6) 00:08:06.604 19761.625 - 19862.449: 99.4099% ( 5) 00:08:06.604 19862.449 - 19963.274: 99.4434% ( 5) 00:08:06.604 19963.274 - 20064.098: 99.4702% ( 4) 00:08:06.604 20064.098 - 20164.923: 99.5038% ( 5) 00:08:06.604 20164.923 - 20265.748: 99.5306% ( 4) 00:08:06.604 20265.748 - 20366.572: 99.5641% ( 5) 00:08:06.604 20366.572 - 20467.397: 99.5708% ( 1) 00:08:06.604 24500.382 - 24601.206: 99.5842% ( 2) 00:08:06.604 24601.206 - 24702.031: 99.6043% ( 3) 00:08:06.604 24702.031 - 24802.855: 99.6178% ( 2) 00:08:06.604 24903.680 - 25004.505: 99.6245% ( 1) 00:08:06.604 25105.329 - 25206.154: 99.6312% ( 1) 00:08:06.604 25206.154 - 25306.978: 99.6647% ( 5) 00:08:06.604 25306.978 - 25407.803: 99.7116% ( 7) 00:08:06.604 25407.803 - 25508.628: 99.7385% ( 4) 00:08:06.604 25508.628 - 25609.452: 99.7720% ( 5) 00:08:06.604 25609.452 - 25710.277: 99.8055% ( 5) 00:08:06.604 25710.277 - 25811.102: 99.8458% ( 6) 00:08:06.604 25811.102 - 26012.751: 99.9195% ( 11) 00:08:06.604 26012.751 - 26214.400: 99.9933% ( 11) 00:08:06.604 26214.400 - 26416.049: 100.0000% ( 1) 00:08:06.604 00:08:06.604 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:06.604 ============================================================================== 00:08:06.604 Range in us Cumulative IO count 00:08:06.604 5671.385 - 5696.591: 0.0067% ( 1) 00:08:06.604 5873.034 - 5898.240: 0.0134% ( 
1) 00:08:06.604 5898.240 - 5923.446: 0.0402% ( 4) 00:08:06.604 5923.446 - 5948.652: 0.1207% ( 12) 00:08:06.604 5948.652 - 5973.858: 0.1811% ( 9) 00:08:06.604 5973.858 - 5999.065: 0.2280% ( 7) 00:08:06.604 5999.065 - 6024.271: 0.2884% ( 9) 00:08:06.604 6024.271 - 6049.477: 0.3018% ( 2) 00:08:06.604 6049.477 - 6074.683: 0.3152% ( 2) 00:08:06.604 6074.683 - 6099.889: 0.3286% ( 2) 00:08:06.604 6099.889 - 6125.095: 0.3420% ( 2) 00:08:06.604 6125.095 - 6150.302: 0.3554% ( 2) 00:08:06.604 6150.302 - 6175.508: 0.3688% ( 2) 00:08:06.604 6175.508 - 6200.714: 0.3822% ( 2) 00:08:06.604 6200.714 - 6225.920: 0.3957% ( 2) 00:08:06.604 6225.920 - 6251.126: 0.4091% ( 2) 00:08:06.604 6251.126 - 6276.332: 0.4225% ( 2) 00:08:06.604 6276.332 - 6301.538: 0.4292% ( 1) 00:08:06.604 7007.311 - 7057.723: 0.4359% ( 1) 00:08:06.604 7208.960 - 7259.372: 0.4828% ( 7) 00:08:06.604 7259.372 - 7309.785: 0.5499% ( 10) 00:08:06.604 7309.785 - 7360.197: 0.7242% ( 26) 00:08:06.604 7360.197 - 7410.609: 1.0595% ( 50) 00:08:06.604 7410.609 - 7461.022: 1.5960% ( 80) 00:08:06.604 7461.022 - 7511.434: 2.3136% ( 107) 00:08:06.605 7511.434 - 7561.846: 3.2658% ( 142) 00:08:06.605 7561.846 - 7612.258: 5.2106% ( 290) 00:08:06.605 7612.258 - 7662.671: 7.2023% ( 297) 00:08:06.605 7662.671 - 7713.083: 9.8578% ( 396) 00:08:06.605 7713.083 - 7763.495: 13.0767% ( 480) 00:08:06.605 7763.495 - 7813.908: 16.6443% ( 532) 00:08:06.605 7813.908 - 7864.320: 20.7752% ( 616) 00:08:06.605 7864.320 - 7914.732: 25.3554% ( 683) 00:08:06.605 7914.732 - 7965.145: 30.4587% ( 761) 00:08:06.605 7965.145 - 8015.557: 35.9442% ( 818) 00:08:06.605 8015.557 - 8065.969: 41.6443% ( 850) 00:08:06.605 8065.969 - 8116.382: 46.0032% ( 650) 00:08:06.605 8116.382 - 8166.794: 50.3219% ( 644) 00:08:06.605 8166.794 - 8217.206: 54.4193% ( 611) 00:08:06.605 8217.206 - 8267.618: 58.5367% ( 614) 00:08:06.605 8267.618 - 8318.031: 62.6878% ( 619) 00:08:06.605 8318.031 - 8368.443: 65.4708% ( 415) 00:08:06.605 8368.443 - 8418.855: 67.9050% ( 363) 00:08:06.605 8418.855 - 8469.268: 70.4936% ( 386) 00:08:06.605 8469.268 - 8519.680: 72.9077% ( 360) 00:08:06.605 8519.680 - 8570.092: 74.9933% ( 311) 00:08:06.605 8570.092 - 8620.505: 76.3345% ( 200) 00:08:06.605 8620.505 - 8670.917: 77.7629% ( 213) 00:08:06.605 8670.917 - 8721.329: 78.7285% ( 144) 00:08:06.605 8721.329 - 8771.742: 79.9222% ( 178) 00:08:06.605 8771.742 - 8822.154: 80.6398% ( 107) 00:08:06.605 8822.154 - 8872.566: 81.3640% ( 108) 00:08:06.605 8872.566 - 8922.978: 82.1017% ( 110) 00:08:06.605 8922.978 - 8973.391: 82.7521% ( 97) 00:08:06.605 8973.391 - 9023.803: 83.5099% ( 113) 00:08:06.605 9023.803 - 9074.215: 84.3281% ( 122) 00:08:06.605 9074.215 - 9124.628: 84.8511% ( 78) 00:08:06.605 9124.628 - 9175.040: 85.6290% ( 116) 00:08:06.605 9175.040 - 9225.452: 86.2728% ( 96) 00:08:06.605 9225.452 - 9275.865: 86.7422% ( 70) 00:08:06.605 9275.865 - 9326.277: 87.4262% ( 102) 00:08:06.605 9326.277 - 9376.689: 87.9560% ( 79) 00:08:06.605 9376.689 - 9427.102: 88.3181% ( 54) 00:08:06.605 9427.102 - 9477.514: 88.7138% ( 59) 00:08:06.605 9477.514 - 9527.926: 89.1229% ( 61) 00:08:06.605 9527.926 - 9578.338: 89.6258% ( 75) 00:08:06.605 9578.338 - 9628.751: 89.9678% ( 51) 00:08:06.605 9628.751 - 9679.163: 90.4909% ( 78) 00:08:06.605 9679.163 - 9729.575: 90.8597% ( 55) 00:08:06.605 9729.575 - 9779.988: 91.0944% ( 35) 00:08:06.605 9779.988 - 9830.400: 91.2487% ( 23) 00:08:06.605 9830.400 - 9880.812: 91.3962% ( 22) 00:08:06.605 9880.812 - 9931.225: 91.5102% ( 17) 00:08:06.605 9931.225 - 9981.637: 91.6108% ( 15) 00:08:06.605 9981.637 - 10032.049: 
91.7181% ( 16) 00:08:06.605 10032.049 - 10082.462: 91.7986% ( 12) 00:08:06.605 10082.462 - 10132.874: 91.8388% ( 6) 00:08:06.605 10132.874 - 10183.286: 91.9058% ( 10) 00:08:06.605 10183.286 - 10233.698: 91.9595% ( 8) 00:08:06.605 10233.698 - 10284.111: 92.0333% ( 11) 00:08:06.605 10284.111 - 10334.523: 92.1204% ( 13) 00:08:06.605 10334.523 - 10384.935: 92.2479% ( 19) 00:08:06.605 10384.935 - 10435.348: 92.4289% ( 27) 00:08:06.605 10435.348 - 10485.760: 92.5094% ( 12) 00:08:06.605 10485.760 - 10536.172: 92.5966% ( 13) 00:08:06.605 10536.172 - 10586.585: 92.7307% ( 20) 00:08:06.605 10586.585 - 10636.997: 92.8782% ( 22) 00:08:06.605 10636.997 - 10687.409: 93.0392% ( 24) 00:08:06.605 10687.409 - 10737.822: 93.1733% ( 20) 00:08:06.605 10737.822 - 10788.234: 93.3208% ( 22) 00:08:06.605 10788.234 - 10838.646: 93.5689% ( 37) 00:08:06.606 10838.646 - 10889.058: 93.7232% ( 23) 00:08:06.606 10889.058 - 10939.471: 93.8506% ( 19) 00:08:06.606 10939.471 - 10989.883: 93.9713% ( 18) 00:08:06.606 10989.883 - 11040.295: 94.1658% ( 29) 00:08:06.606 11040.295 - 11090.708: 94.3334% ( 25) 00:08:06.606 11090.708 - 11141.120: 94.5078% ( 26) 00:08:06.606 11141.120 - 11191.532: 94.8230% ( 47) 00:08:06.606 11191.532 - 11241.945: 95.0308% ( 31) 00:08:06.606 11241.945 - 11292.357: 95.2186% ( 28) 00:08:06.606 11292.357 - 11342.769: 95.3863% ( 25) 00:08:06.606 11342.769 - 11393.182: 95.6679% ( 42) 00:08:06.606 11393.182 - 11443.594: 95.8423% ( 26) 00:08:06.606 11443.594 - 11494.006: 95.9563% ( 17) 00:08:06.606 11494.006 - 11544.418: 96.0770% ( 18) 00:08:06.606 11544.418 - 11594.831: 96.1910% ( 17) 00:08:06.606 11594.831 - 11645.243: 96.3788% ( 28) 00:08:06.606 11645.243 - 11695.655: 96.6671% ( 43) 00:08:06.606 11695.655 - 11746.068: 96.8012% ( 20) 00:08:06.606 11746.068 - 11796.480: 96.9286% ( 19) 00:08:06.606 11796.480 - 11846.892: 97.0561% ( 19) 00:08:06.606 11846.892 - 11897.305: 97.1701% ( 17) 00:08:06.606 11897.305 - 11947.717: 97.2639% ( 14) 00:08:06.606 11947.717 - 11998.129: 97.3444% ( 12) 00:08:06.606 11998.129 - 12048.542: 97.4182% ( 11) 00:08:06.606 12048.542 - 12098.954: 97.4987% ( 12) 00:08:06.606 12098.954 - 12149.366: 97.5657% ( 10) 00:08:06.606 12149.366 - 12199.778: 97.5992% ( 5) 00:08:06.606 12199.778 - 12250.191: 97.6395% ( 6) 00:08:06.606 12250.191 - 12300.603: 97.6797% ( 6) 00:08:06.606 12300.603 - 12351.015: 97.7133% ( 5) 00:08:06.606 12351.015 - 12401.428: 97.7468% ( 5) 00:08:06.606 12401.428 - 12451.840: 97.7736% ( 4) 00:08:06.606 12451.840 - 12502.252: 97.7937% ( 3) 00:08:06.606 12502.252 - 12552.665: 97.8205% ( 4) 00:08:06.606 12552.665 - 12603.077: 97.8474% ( 4) 00:08:06.606 12603.077 - 12653.489: 97.8943% ( 7) 00:08:06.606 12653.489 - 12703.902: 97.9278% ( 5) 00:08:06.606 12703.902 - 12754.314: 97.9547% ( 4) 00:08:06.606 12754.314 - 12804.726: 97.9882% ( 5) 00:08:06.606 12804.726 - 12855.138: 98.0083% ( 3) 00:08:06.606 12855.138 - 12905.551: 98.0486% ( 6) 00:08:06.606 12905.551 - 13006.375: 98.1894% ( 21) 00:08:06.606 13006.375 - 13107.200: 98.3235% ( 20) 00:08:06.606 13107.200 - 13208.025: 98.4241% ( 15) 00:08:06.606 13208.025 - 13308.849: 98.4979% ( 11) 00:08:06.606 13308.849 - 13409.674: 98.6320% ( 20) 00:08:06.606 13409.674 - 13510.498: 98.7393% ( 16) 00:08:06.606 13510.498 - 13611.323: 98.9337% ( 29) 00:08:06.606 13611.323 - 13712.148: 99.0477% ( 17) 00:08:06.606 13712.148 - 13812.972: 99.1349% ( 13) 00:08:06.606 13812.972 - 13913.797: 99.1416% ( 1) 00:08:06.606 19660.800 - 19761.625: 99.1617% ( 3) 00:08:06.606 19761.625 - 19862.449: 99.2020% ( 6) 00:08:06.606 19862.449 - 19963.274: 
99.2355% ( 5) 00:08:06.606 19963.274 - 20064.098: 99.2758% ( 6) 00:08:06.606 20064.098 - 20164.923: 99.3160% ( 6) 00:08:06.606 20164.923 - 20265.748: 99.3495% ( 5) 00:08:06.606 20265.748 - 20366.572: 99.3898% ( 6) 00:08:06.606 20366.572 - 20467.397: 99.4300% ( 6) 00:08:06.606 20467.397 - 20568.222: 99.4635% ( 5) 00:08:06.606 20568.222 - 20669.046: 99.5038% ( 6) 00:08:06.606 20669.046 - 20769.871: 99.5440% ( 6) 00:08:06.606 20769.871 - 20870.695: 99.5708% ( 4) 00:08:06.606 24399.557 - 24500.382: 99.5976% ( 4) 00:08:06.606 24500.382 - 24601.206: 99.6312% ( 5) 00:08:06.606 24601.206 - 24702.031: 99.6647% ( 5) 00:08:06.606 24702.031 - 24802.855: 99.7049% ( 6) 00:08:06.606 24802.855 - 24903.680: 99.7385% ( 5) 00:08:06.606 24903.680 - 25004.505: 99.7787% ( 6) 00:08:06.606 25004.505 - 25105.329: 99.8189% ( 6) 00:08:06.606 25105.329 - 25206.154: 99.8525% ( 5) 00:08:06.606 25206.154 - 25306.978: 99.8927% ( 6) 00:08:06.606 25306.978 - 25407.803: 99.9329% ( 6) 00:08:06.606 25407.803 - 25508.628: 99.9665% ( 5) 00:08:06.606 25508.628 - 25609.452: 100.0000% ( 5) 00:08:06.606 00:08:06.606 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:06.606 ============================================================================== 00:08:06.606 Range in us Cumulative IO count 00:08:06.606 5318.498 - 5343.705: 0.0268% ( 4) 00:08:06.606 5343.705 - 5368.911: 0.0469% ( 3) 00:08:06.606 5368.911 - 5394.117: 0.0805% ( 5) 00:08:06.606 5394.117 - 5419.323: 0.1006% ( 3) 00:08:06.606 5419.323 - 5444.529: 0.1073% ( 1) 00:08:06.606 5444.529 - 5469.735: 0.1207% ( 2) 00:08:06.606 5469.735 - 5494.942: 0.1542% ( 5) 00:08:06.606 5494.942 - 5520.148: 0.1811% ( 4) 00:08:06.606 5520.148 - 5545.354: 0.2012% ( 3) 00:08:06.606 5545.354 - 5570.560: 0.2146% ( 2) 00:08:06.607 5570.560 - 5595.766: 0.2347% ( 3) 00:08:06.607 5595.766 - 5620.972: 0.2615% ( 4) 00:08:06.607 5620.972 - 5646.178: 0.2682% ( 1) 00:08:06.607 5721.797 - 5747.003: 0.2817% ( 2) 00:08:06.607 5747.003 - 5772.209: 0.2884% ( 1) 00:08:06.607 5772.209 - 5797.415: 0.3018% ( 2) 00:08:06.607 5797.415 - 5822.622: 0.3085% ( 1) 00:08:06.607 5822.622 - 5847.828: 0.3219% ( 2) 00:08:06.607 5847.828 - 5873.034: 0.3286% ( 1) 00:08:06.607 5873.034 - 5898.240: 0.3420% ( 2) 00:08:06.607 5898.240 - 5923.446: 0.3487% ( 1) 00:08:06.607 5923.446 - 5948.652: 0.3554% ( 1) 00:08:06.607 5948.652 - 5973.858: 0.3688% ( 2) 00:08:06.607 5973.858 - 5999.065: 0.3889% ( 3) 00:08:06.607 6024.271 - 6049.477: 0.4158% ( 4) 00:08:06.607 6049.477 - 6074.683: 0.4292% ( 2) 00:08:06.607 7007.311 - 7057.723: 0.4493% ( 3) 00:08:06.607 7057.723 - 7108.135: 0.4694% ( 3) 00:08:06.607 7108.135 - 7158.548: 0.5700% ( 15) 00:08:06.607 7158.548 - 7208.960: 0.8114% ( 36) 00:08:06.607 7208.960 - 7259.372: 1.1803% ( 55) 00:08:06.607 7259.372 - 7309.785: 1.8509% ( 100) 00:08:06.607 7309.785 - 7360.197: 2.5148% ( 99) 00:08:06.607 7360.197 - 7410.609: 3.7822% ( 189) 00:08:06.607 7410.609 - 7461.022: 5.3916% ( 240) 00:08:06.607 7461.022 - 7511.434: 6.9340% ( 230) 00:08:06.607 7511.434 - 7561.846: 9.0129% ( 310) 00:08:06.607 7561.846 - 7612.258: 11.4807% ( 368) 00:08:06.607 7612.258 - 7662.671: 13.7741% ( 342) 00:08:06.607 7662.671 - 7713.083: 16.5437% ( 413) 00:08:06.607 7713.083 - 7763.495: 20.1851% ( 543) 00:08:06.607 7763.495 - 7813.908: 23.4911% ( 493) 00:08:06.607 7813.908 - 7864.320: 26.7972% ( 493) 00:08:06.607 7864.320 - 7914.732: 30.2374% ( 513) 00:08:06.607 7914.732 - 7965.145: 33.7513% ( 524) 00:08:06.607 7965.145 - 8015.557: 37.1580% ( 508) 00:08:06.607 8015.557 - 8065.969: 40.6384% ( 519) 
00:08:06.607 8065.969 - 8116.382: 44.1993% ( 531) 00:08:06.607 8116.382 - 8166.794: 47.4718% ( 488) 00:08:06.607 8166.794 - 8217.206: 50.6035% ( 467) 00:08:06.607 8217.206 - 8267.618: 54.0437% ( 513) 00:08:06.607 8267.618 - 8318.031: 57.1419% ( 462) 00:08:06.607 8318.031 - 8368.443: 60.0188% ( 429) 00:08:06.607 8368.443 - 8418.855: 63.0432% ( 451) 00:08:06.607 8418.855 - 8469.268: 65.7658% ( 406) 00:08:06.607 8469.268 - 8519.680: 68.2470% ( 370) 00:08:06.607 8519.680 - 8570.092: 70.5271% ( 340) 00:08:06.607 8570.092 - 8620.505: 72.5322% ( 299) 00:08:06.607 8620.505 - 8670.917: 74.2959% ( 263) 00:08:06.607 8670.917 - 8721.329: 75.7712% ( 220) 00:08:06.607 8721.329 - 8771.742: 77.0722% ( 194) 00:08:06.607 8771.742 - 8822.154: 78.3597% ( 192) 00:08:06.607 8822.154 - 8872.566: 79.6741% ( 196) 00:08:06.607 8872.566 - 8922.978: 80.7068% ( 154) 00:08:06.607 8922.978 - 8973.391: 81.9675% ( 188) 00:08:06.607 8973.391 - 9023.803: 82.8594% ( 133) 00:08:06.607 9023.803 - 9074.215: 83.7446% ( 132) 00:08:06.607 9074.215 - 9124.628: 84.4957% ( 112) 00:08:06.607 9124.628 - 9175.040: 85.1596% ( 99) 00:08:06.607 9175.040 - 9225.452: 85.8101% ( 97) 00:08:06.607 9225.452 - 9275.865: 86.3466% ( 80) 00:08:06.607 9275.865 - 9326.277: 86.8629% ( 77) 00:08:06.607 9326.277 - 9376.689: 87.3525% ( 73) 00:08:06.607 9376.689 - 9427.102: 87.7884% ( 65) 00:08:06.607 9427.102 - 9477.514: 88.2242% ( 65) 00:08:06.607 9477.514 - 9527.926: 88.7004% ( 71) 00:08:06.607 9527.926 - 9578.338: 89.0893% ( 58) 00:08:06.607 9578.338 - 9628.751: 89.4716% ( 57) 00:08:06.607 9628.751 - 9679.163: 89.7935% ( 48) 00:08:06.607 9679.163 - 9729.575: 90.0952% ( 45) 00:08:06.607 9729.575 - 9779.988: 90.3366% ( 36) 00:08:06.607 9779.988 - 9830.400: 90.5848% ( 37) 00:08:06.607 9830.400 - 9880.812: 90.8731% ( 43) 00:08:06.607 9880.812 - 9931.225: 91.1078% ( 35) 00:08:06.607 9931.225 - 9981.637: 91.2621% ( 23) 00:08:06.607 9981.637 - 10032.049: 91.4498% ( 28) 00:08:06.607 10032.049 - 10082.462: 91.5638% ( 17) 00:08:06.607 10082.462 - 10132.874: 91.7382% ( 26) 00:08:06.607 10132.874 - 10183.286: 91.8790% ( 21) 00:08:06.607 10183.286 - 10233.698: 91.9863% ( 16) 00:08:06.607 10233.698 - 10284.111: 92.0936% ( 16) 00:08:06.607 10284.111 - 10334.523: 92.2546% ( 24) 00:08:06.607 10334.523 - 10384.935: 92.3686% ( 17) 00:08:06.607 10384.935 - 10435.348: 92.5161% ( 22) 00:08:06.607 10435.348 - 10485.760: 92.6703% ( 23) 00:08:06.607 10485.760 - 10536.172: 92.8246% ( 23) 00:08:06.607 10536.172 - 10586.585: 93.0928% ( 40) 00:08:06.607 10586.585 - 10636.997: 93.2672% ( 26) 00:08:06.607 10636.997 - 10687.409: 93.4348% ( 25) 00:08:06.607 10687.409 - 10737.822: 93.5488% ( 17) 00:08:06.607 10737.822 - 10788.234: 93.7098% ( 24) 00:08:06.607 10788.234 - 10838.646: 93.9780% ( 40) 00:08:06.607 10838.646 - 10889.058: 94.1322% ( 23) 00:08:06.607 10889.058 - 10939.471: 94.2597% ( 19) 00:08:06.607 10939.471 - 10989.883: 94.4005% ( 21) 00:08:06.607 10989.883 - 11040.295: 94.6017% ( 30) 00:08:06.607 11040.295 - 11090.708: 94.7626% ( 24) 00:08:06.607 11090.708 - 11141.120: 94.9571% ( 29) 00:08:06.607 11141.120 - 11191.532: 95.1650% ( 31) 00:08:06.607 11191.532 - 11241.945: 95.3393% ( 26) 00:08:06.607 11241.945 - 11292.357: 95.5606% ( 33) 00:08:06.607 11292.357 - 11342.769: 95.6880% ( 19) 00:08:06.607 11342.769 - 11393.182: 95.8758% ( 28) 00:08:06.607 11393.182 - 11443.594: 96.0099% ( 20) 00:08:06.607 11443.594 - 11494.006: 96.2178% ( 31) 00:08:06.607 11494.006 - 11544.418: 96.3720% ( 23) 00:08:06.607 11544.418 - 11594.831: 96.5531% ( 27) 00:08:06.607 11594.831 - 11645.243: 
96.6872% ( 20) 00:08:06.607 11645.243 - 11695.655: 96.8012% ( 17) 00:08:06.607 11695.655 - 11746.068: 96.9085% ( 16) 00:08:06.608 11746.068 - 11796.480: 97.0158% ( 16) 00:08:06.608 11796.480 - 11846.892: 97.1499% ( 20) 00:08:06.608 11846.892 - 11897.305: 97.2304% ( 12) 00:08:06.608 11897.305 - 11947.717: 97.2908% ( 9) 00:08:06.608 11947.717 - 11998.129: 97.3712% ( 12) 00:08:06.608 11998.129 - 12048.542: 97.4249% ( 8) 00:08:06.608 12048.542 - 12098.954: 97.4987% ( 11) 00:08:06.608 12098.954 - 12149.366: 97.5389% ( 6) 00:08:06.608 12149.366 - 12199.778: 97.5858% ( 7) 00:08:06.608 12199.778 - 12250.191: 97.6328% ( 7) 00:08:06.608 12250.191 - 12300.603: 97.6730% ( 6) 00:08:06.608 12300.603 - 12351.015: 97.6998% ( 4) 00:08:06.608 12351.015 - 12401.428: 97.7200% ( 3) 00:08:06.608 12401.428 - 12451.840: 97.7401% ( 3) 00:08:06.608 12451.840 - 12502.252: 97.7736% ( 5) 00:08:06.608 12502.252 - 12552.665: 97.8474% ( 11) 00:08:06.608 12552.665 - 12603.077: 97.8943% ( 7) 00:08:06.608 12603.077 - 12653.489: 97.9144% ( 3) 00:08:06.608 12653.489 - 12703.902: 97.9413% ( 4) 00:08:06.608 12703.902 - 12754.314: 97.9748% ( 5) 00:08:06.608 12754.314 - 12804.726: 98.0150% ( 6) 00:08:06.608 12804.726 - 12855.138: 98.0351% ( 3) 00:08:06.608 12855.138 - 12905.551: 98.0821% ( 7) 00:08:06.608 12905.551 - 13006.375: 98.2363% ( 23) 00:08:06.608 13006.375 - 13107.200: 98.3034% ( 10) 00:08:06.608 13107.200 - 13208.025: 98.3771% ( 11) 00:08:06.608 13208.025 - 13308.849: 98.4777% ( 15) 00:08:06.608 13308.849 - 13409.674: 98.5783% ( 15) 00:08:06.608 13409.674 - 13510.498: 98.6722% ( 14) 00:08:06.608 13510.498 - 13611.323: 98.7795% ( 16) 00:08:06.608 13611.323 - 13712.148: 98.8600% ( 12) 00:08:06.608 13712.148 - 13812.972: 98.9337% ( 11) 00:08:06.608 13812.972 - 13913.797: 98.9874% ( 8) 00:08:06.608 13913.797 - 14014.622: 99.0209% ( 5) 00:08:06.608 14014.622 - 14115.446: 99.0545% ( 5) 00:08:06.608 14115.446 - 14216.271: 99.0813% ( 4) 00:08:06.608 14216.271 - 14317.095: 99.1215% ( 6) 00:08:06.608 14317.095 - 14417.920: 99.1416% ( 3) 00:08:06.608 18753.378 - 18854.203: 99.1617% ( 3) 00:08:06.608 18854.203 - 18955.028: 99.1819% ( 3) 00:08:06.608 18955.028 - 19055.852: 99.2154% ( 5) 00:08:06.608 19055.852 - 19156.677: 99.2489% ( 5) 00:08:06.608 19156.677 - 19257.502: 99.2825% ( 5) 00:08:06.608 19257.502 - 19358.326: 99.3093% ( 4) 00:08:06.608 19358.326 - 19459.151: 99.3428% ( 5) 00:08:06.608 19459.151 - 19559.975: 99.3763% ( 5) 00:08:06.608 19559.975 - 19660.800: 99.4099% ( 5) 00:08:06.608 19660.800 - 19761.625: 99.4367% ( 4) 00:08:06.608 19761.625 - 19862.449: 99.4702% ( 5) 00:08:06.608 19862.449 - 19963.274: 99.4970% ( 4) 00:08:06.608 19963.274 - 20064.098: 99.5373% ( 6) 00:08:06.608 20064.098 - 20164.923: 99.5641% ( 4) 00:08:06.608 20164.923 - 20265.748: 99.5708% ( 1) 00:08:06.608 23693.785 - 23794.609: 99.5909% ( 3) 00:08:06.608 23794.609 - 23895.434: 99.6178% ( 4) 00:08:06.608 23895.434 - 23996.258: 99.6379% ( 3) 00:08:06.608 23996.258 - 24097.083: 99.6714% ( 5) 00:08:06.608 24097.083 - 24197.908: 99.6982% ( 4) 00:08:06.608 24197.908 - 24298.732: 99.7183% ( 3) 00:08:06.608 24298.732 - 24399.557: 99.7653% ( 7) 00:08:06.608 24399.557 - 24500.382: 99.7854% ( 3) 00:08:06.609 24500.382 - 24601.206: 99.8055% ( 3) 00:08:06.609 24601.206 - 24702.031: 99.8391% ( 5) 00:08:06.609 24702.031 - 24802.855: 99.8592% ( 3) 00:08:06.609 24802.855 - 24903.680: 99.8860% ( 4) 00:08:06.609 24903.680 - 25004.505: 99.9128% ( 4) 00:08:06.609 25004.505 - 25105.329: 99.9464% ( 5) 00:08:06.609 25105.329 - 25206.154: 99.9732% ( 4) 00:08:06.609 25206.154 
- 25306.978: 99.9933% ( 3) 00:08:06.609 25306.978 - 25407.803: 100.0000% ( 1) 00:08:06.609 00:08:06.609 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:06.609 ============================================================================== 00:08:06.609 Range in us Cumulative IO count 00:08:06.609 5116.849 - 5142.055: 0.0134% ( 2) 00:08:06.609 5142.055 - 5167.262: 0.0268% ( 2) 00:08:06.609 5167.262 - 5192.468: 0.0469% ( 3) 00:08:06.609 5192.468 - 5217.674: 0.0805% ( 5) 00:08:06.609 5217.674 - 5242.880: 0.1073% ( 4) 00:08:06.609 5242.880 - 5268.086: 0.1475% ( 6) 00:08:06.609 5268.086 - 5293.292: 0.1878% ( 6) 00:08:06.609 5293.292 - 5318.498: 0.2280% ( 6) 00:08:06.609 5318.498 - 5343.705: 0.2548% ( 4) 00:08:06.609 5343.705 - 5368.911: 0.2682% ( 2) 00:08:06.609 5368.911 - 5394.117: 0.2749% ( 1) 00:08:06.609 5394.117 - 5419.323: 0.2884% ( 2) 00:08:06.609 5419.323 - 5444.529: 0.3018% ( 2) 00:08:06.609 5444.529 - 5469.735: 0.3152% ( 2) 00:08:06.609 5469.735 - 5494.942: 0.3286% ( 2) 00:08:06.609 5494.942 - 5520.148: 0.3420% ( 2) 00:08:06.609 5520.148 - 5545.354: 0.3554% ( 2) 00:08:06.609 5545.354 - 5570.560: 0.3688% ( 2) 00:08:06.609 5570.560 - 5595.766: 0.3822% ( 2) 00:08:06.609 5595.766 - 5620.972: 0.3957% ( 2) 00:08:06.609 5620.972 - 5646.178: 0.4091% ( 2) 00:08:06.609 5646.178 - 5671.385: 0.4225% ( 2) 00:08:06.609 5671.385 - 5696.591: 0.4292% ( 1) 00:08:06.609 7108.135 - 7158.548: 0.4359% ( 1) 00:08:06.609 7158.548 - 7208.960: 0.4761% ( 6) 00:08:06.611 7208.960 - 7259.372: 0.5566% ( 12) 00:08:06.611 7259.372 - 7309.785: 0.6706% ( 17) 00:08:06.611 7309.785 - 7360.197: 0.9120% ( 36) 00:08:06.611 7360.197 - 7410.609: 1.3010% ( 58) 00:08:06.611 7410.609 - 7461.022: 1.9917% ( 103) 00:08:06.611 7461.022 - 7511.434: 3.0244% ( 154) 00:08:06.612 7511.434 - 7561.846: 4.4729% ( 216) 00:08:06.612 7561.846 - 7612.258: 6.4042% ( 288) 00:08:06.612 7612.258 - 7662.671: 8.8720% ( 368) 00:08:06.612 7662.671 - 7713.083: 11.7556% ( 430) 00:08:06.612 7713.083 - 7763.495: 15.0684% ( 494) 00:08:06.612 7763.495 - 7813.908: 18.6494% ( 534) 00:08:06.612 7813.908 - 7864.320: 22.7535% ( 612) 00:08:06.612 7864.320 - 7914.732: 26.9179% ( 621) 00:08:06.612 7914.732 - 7965.145: 31.2098% ( 640) 00:08:06.612 7965.145 - 8015.557: 35.9710% ( 710) 00:08:06.612 8015.557 - 8065.969: 40.6317% ( 695) 00:08:06.612 8065.969 - 8116.382: 44.8900% ( 635) 00:08:06.612 8116.382 - 8166.794: 48.9270% ( 602) 00:08:06.612 8166.794 - 8217.206: 53.0579% ( 616) 00:08:06.612 8217.206 - 8267.618: 57.0011% ( 588) 00:08:06.612 8267.618 - 8318.031: 60.6156% ( 539) 00:08:06.612 8318.031 - 8368.443: 63.9552% ( 498) 00:08:06.612 8368.443 - 8418.855: 66.8991% ( 439) 00:08:06.612 8418.855 - 8469.268: 69.2798% ( 355) 00:08:06.612 8469.268 - 8519.680: 71.8214% ( 379) 00:08:06.612 8519.680 - 8570.092: 73.9069% ( 311) 00:08:06.612 8570.092 - 8620.505: 75.4694% ( 233) 00:08:06.612 8620.505 - 8670.917: 76.9246% ( 217) 00:08:06.612 8670.917 - 8721.329: 78.3396% ( 211) 00:08:06.612 8721.329 - 8771.742: 79.4863% ( 171) 00:08:06.612 8771.742 - 8822.154: 80.3782% ( 133) 00:08:06.612 8822.154 - 8872.566: 81.1025% ( 108) 00:08:06.612 8872.566 - 8922.978: 81.8066% ( 105) 00:08:06.612 8922.978 - 8973.391: 82.5174% ( 106) 00:08:06.612 8973.391 - 9023.803: 83.1277% ( 91) 00:08:06.612 9023.803 - 9074.215: 83.6239% ( 74) 00:08:06.612 9074.215 - 9124.628: 84.2677% ( 96) 00:08:06.612 9124.628 - 9175.040: 85.0188% ( 112) 00:08:06.612 9175.040 - 9225.452: 85.6089% ( 88) 00:08:06.612 9225.452 - 9275.865: 86.1454% ( 80) 00:08:06.612 9275.865 - 9326.277: 86.6752% ( 
79) 00:08:06.612 9326.277 - 9376.689: 87.3122% ( 95) 00:08:06.612 9376.689 - 9427.102: 87.8957% ( 87) 00:08:06.612 9427.102 - 9477.514: 88.4523% ( 83) 00:08:06.612 9477.514 - 9527.926: 88.9284% ( 71) 00:08:06.612 9527.926 - 9578.338: 89.4514% ( 78) 00:08:06.612 9578.338 - 9628.751: 89.8940% ( 66) 00:08:06.612 9628.751 - 9679.163: 90.2763% ( 57) 00:08:06.612 9679.163 - 9729.575: 90.5781% ( 45) 00:08:06.612 9729.575 - 9779.988: 90.7994% ( 33) 00:08:06.612 9779.988 - 9830.400: 91.0341% ( 35) 00:08:06.612 9830.400 - 9880.812: 91.2554% ( 33) 00:08:06.612 9880.812 - 9931.225: 91.4364% ( 27) 00:08:06.612 9931.225 - 9981.637: 91.6242% ( 28) 00:08:06.612 9981.637 - 10032.049: 91.7650% ( 21) 00:08:06.612 10032.049 - 10082.462: 91.9394% ( 26) 00:08:06.612 10082.462 - 10132.874: 92.1204% ( 27) 00:08:06.612 10132.874 - 10183.286: 92.2479% ( 19) 00:08:06.613 10183.286 - 10233.698: 92.3686% ( 18) 00:08:06.613 10233.698 - 10284.111: 92.4692% ( 15) 00:08:06.613 10284.111 - 10334.523: 92.6368% ( 25) 00:08:06.613 10334.523 - 10384.935: 92.8045% ( 25) 00:08:06.613 10384.935 - 10435.348: 92.9587% ( 23) 00:08:06.613 10435.348 - 10485.760: 93.0861% ( 19) 00:08:06.613 10485.760 - 10536.172: 93.2135% ( 19) 00:08:06.613 10536.172 - 10586.585: 93.3476% ( 20) 00:08:06.613 10586.585 - 10636.997: 93.5086% ( 24) 00:08:06.613 10636.997 - 10687.409: 93.6360% ( 19) 00:08:06.613 10687.409 - 10737.822: 93.7835% ( 22) 00:08:06.613 10737.822 - 10788.234: 93.9512% ( 25) 00:08:06.613 10788.234 - 10838.646: 94.1725% ( 33) 00:08:06.613 10838.646 - 10889.058: 94.3401% ( 25) 00:08:06.613 10889.058 - 10939.471: 94.5212% ( 27) 00:08:06.613 10939.471 - 10989.883: 94.7358% ( 32) 00:08:06.613 10989.883 - 11040.295: 94.9034% ( 25) 00:08:06.613 11040.295 - 11090.708: 95.1180% ( 32) 00:08:06.613 11090.708 - 11141.120: 95.3863% ( 40) 00:08:06.613 11141.120 - 11191.532: 95.6545% ( 40) 00:08:06.613 11191.532 - 11241.945: 95.8289% ( 26) 00:08:06.613 11241.945 - 11292.357: 95.9697% ( 21) 00:08:06.613 11292.357 - 11342.769: 96.0971% ( 19) 00:08:06.613 11342.769 - 11393.182: 96.2312% ( 20) 00:08:06.613 11393.182 - 11443.594: 96.3720% ( 21) 00:08:06.613 11443.594 - 11494.006: 96.5330% ( 24) 00:08:06.613 11494.006 - 11544.418: 96.7141% ( 27) 00:08:06.613 11544.418 - 11594.831: 96.8146% ( 15) 00:08:06.613 11594.831 - 11645.243: 96.9085% ( 14) 00:08:06.613 11645.243 - 11695.655: 96.9890% ( 12) 00:08:06.613 11695.655 - 11746.068: 97.1298% ( 21) 00:08:06.613 11746.068 - 11796.480: 97.2103% ( 12) 00:08:06.613 11796.480 - 11846.892: 97.2774% ( 10) 00:08:06.613 11846.892 - 11897.305: 97.3377% ( 9) 00:08:06.614 11897.305 - 11947.717: 97.4182% ( 12) 00:08:06.614 11947.717 - 11998.129: 97.4785% ( 9) 00:08:06.614 11998.129 - 12048.542: 97.5322% ( 8) 00:08:06.614 12048.542 - 12098.954: 97.5724% ( 6) 00:08:06.614 12098.954 - 12149.366: 97.6127% ( 6) 00:08:06.614 12149.366 - 12199.778: 97.6395% ( 4) 00:08:06.614 12199.778 - 12250.191: 97.6529% ( 2) 00:08:06.614 12250.191 - 12300.603: 97.6663% ( 2) 00:08:06.614 12300.603 - 12351.015: 97.6864% ( 3) 00:08:06.614 12351.015 - 12401.428: 97.7065% ( 3) 00:08:06.614 12401.428 - 12451.840: 97.7200% ( 2) 00:08:06.614 12451.840 - 12502.252: 97.7401% ( 3) 00:08:06.614 12502.252 - 12552.665: 97.7535% ( 2) 00:08:06.614 12552.665 - 12603.077: 97.7736% ( 3) 00:08:06.614 12603.077 - 12653.489: 97.7937% ( 3) 00:08:06.614 12653.489 - 12703.902: 97.8205% ( 4) 00:08:06.614 12703.902 - 12754.314: 97.8809% ( 9) 00:08:06.614 12754.314 - 12804.726: 97.9681% ( 13) 00:08:06.614 12804.726 - 12855.138: 98.0150% ( 7) 00:08:06.614 12855.138 - 
12905.551: 98.0754% ( 9) 00:08:06.614 12905.551 - 13006.375: 98.1827% ( 16) 00:08:06.614 13006.375 - 13107.200: 98.2564% ( 11) 00:08:06.614 13107.200 - 13208.025: 98.3235% ( 10) 00:08:06.614 13208.025 - 13308.849: 98.3973% ( 11) 00:08:06.614 13308.849 - 13409.674: 98.4844% ( 13) 00:08:06.614 13409.674 - 13510.498: 98.6052% ( 18) 00:08:06.614 13510.498 - 13611.323: 98.7192% ( 17) 00:08:06.614 13611.323 - 13712.148: 98.8197% ( 15) 00:08:06.614 13712.148 - 13812.972: 98.8801% ( 9) 00:08:06.614 13812.972 - 13913.797: 98.9136% ( 5) 00:08:06.614 13913.797 - 14014.622: 98.9539% ( 6) 00:08:06.614 14014.622 - 14115.446: 98.9874% ( 5) 00:08:06.614 14115.446 - 14216.271: 99.0209% ( 5) 00:08:06.614 14216.271 - 14317.095: 99.0612% ( 6) 00:08:06.614 14317.095 - 14417.920: 99.1014% ( 6) 00:08:06.614 14417.920 - 14518.745: 99.1416% ( 6) 00:08:06.614 18450.905 - 18551.729: 99.1819% ( 6) 00:08:06.614 18551.729 - 18652.554: 99.2489% ( 10) 00:08:06.614 18652.554 - 18753.378: 99.3160% ( 10) 00:08:06.615 18753.378 - 18854.203: 99.3763% ( 9) 00:08:06.615 18854.203 - 18955.028: 99.4032% ( 4) 00:08:06.615 18955.028 - 19055.852: 99.4300% ( 4) 00:08:06.615 19055.852 - 19156.677: 99.4568% ( 4) 00:08:06.615 19156.677 - 19257.502: 99.4836% ( 4) 00:08:06.615 19257.502 - 19358.326: 99.4903% ( 1) 00:08:06.615 19358.326 - 19459.151: 99.5239% ( 5) 00:08:06.615 19459.151 - 19559.975: 99.5507% ( 4) 00:08:06.615 19559.975 - 19660.800: 99.5708% ( 3) 00:08:06.615 22685.538 - 22786.363: 99.5775% ( 1) 00:08:06.615 22786.363 - 22887.188: 99.6043% ( 4) 00:08:06.615 22887.188 - 22988.012: 99.6245% ( 3) 00:08:06.615 22988.012 - 23088.837: 99.6379% ( 2) 00:08:06.615 23088.837 - 23189.662: 99.6714% ( 5) 00:08:06.615 23189.662 - 23290.486: 99.7116% ( 6) 00:08:06.615 23492.135 - 23592.960: 99.7318% ( 3) 00:08:06.615 23592.960 - 23693.785: 99.7586% ( 4) 00:08:06.615 23693.785 - 23794.609: 99.7921% ( 5) 00:08:06.615 23794.609 - 23895.434: 99.8256% ( 5) 00:08:06.615 23895.434 - 23996.258: 99.8592% ( 5) 00:08:06.615 23996.258 - 24097.083: 99.8927% ( 5) 00:08:06.615 24097.083 - 24197.908: 99.9195% ( 4) 00:08:06.615 24197.908 - 24298.732: 99.9464% ( 4) 00:08:06.615 24298.732 - 24399.557: 99.9732% ( 4) 00:08:06.615 24399.557 - 24500.382: 100.0000% ( 4) 00:08:06.615 00:08:06.615 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:06.615 ============================================================================== 00:08:06.615 Range in us Cumulative IO count 00:08:06.615 4335.458 - 4360.665: 0.0067% ( 1) 00:08:06.615 4486.695 - 4511.902: 0.0201% ( 2) 00:08:06.615 4511.902 - 4537.108: 0.0469% ( 4) 00:08:06.615 4537.108 - 4562.314: 0.0671% ( 3) 00:08:06.615 4562.314 - 4587.520: 0.1207% ( 8) 00:08:06.615 4587.520 - 4612.726: 0.1744% ( 8) 00:08:06.615 4612.726 - 4637.932: 0.2481% ( 11) 00:08:06.615 4637.932 - 4663.138: 0.2682% ( 3) 00:08:06.615 4663.138 - 4688.345: 0.2884% ( 3) 00:08:06.615 4688.345 - 4713.551: 0.3018% ( 2) 00:08:06.615 4713.551 - 4738.757: 0.3219% ( 3) 00:08:06.616 4738.757 - 4763.963: 0.3353% ( 2) 00:08:06.616 4763.963 - 4789.169: 0.3487% ( 2) 00:08:06.616 4789.169 - 4814.375: 0.3554% ( 1) 00:08:06.616 4814.375 - 4839.582: 0.3688% ( 2) 00:08:06.616 4839.582 - 4864.788: 0.3822% ( 2) 00:08:06.616 4864.788 - 4889.994: 0.3889% ( 1) 00:08:06.616 4889.994 - 4915.200: 0.4024% ( 2) 00:08:06.616 4915.200 - 4940.406: 0.4225% ( 3) 00:08:06.616 4940.406 - 4965.612: 0.4292% ( 1) 00:08:06.616 6503.188 - 6553.600: 0.4359% ( 1) 00:08:06.616 6553.600 - 6604.012: 0.4560% ( 3) 00:08:06.616 6604.012 - 6654.425: 0.5164% ( 9) 
00:08:06.616 6654.425 - 6704.837: 0.5968% ( 12) 00:08:06.616 6704.837 - 6755.249: 0.7041% ( 16) 00:08:06.616 6755.249 - 6805.662: 0.7444% ( 6) 00:08:06.616 6805.662 - 6856.074: 0.7645% ( 3) 00:08:06.616 6856.074 - 6906.486: 0.7913% ( 4) 00:08:06.616 6906.486 - 6956.898: 0.8181% ( 4) 00:08:06.616 6956.898 - 7007.311: 0.8383% ( 3) 00:08:06.616 7007.311 - 7057.723: 0.8584% ( 3) 00:08:06.616 7108.135 - 7158.548: 0.8651% ( 1) 00:08:06.616 7158.548 - 7208.960: 0.9120% ( 7) 00:08:06.616 7208.960 - 7259.372: 0.9791% ( 10) 00:08:06.616 7259.372 - 7309.785: 1.1333% ( 23) 00:08:06.616 7309.785 - 7360.197: 1.3948% ( 39) 00:08:06.616 7360.197 - 7410.609: 1.8173% ( 63) 00:08:06.616 7410.609 - 7461.022: 2.4611% ( 96) 00:08:06.616 7461.022 - 7511.434: 3.4268% ( 144) 00:08:06.616 7511.434 - 7561.846: 4.8149% ( 207) 00:08:06.616 7561.846 - 7612.258: 6.6524% ( 274) 00:08:06.616 7612.258 - 7662.671: 8.7715% ( 316) 00:08:06.617 7662.671 - 7713.083: 11.3264% ( 381) 00:08:06.617 7713.083 - 7763.495: 14.5386% ( 479) 00:08:06.617 7763.495 - 7813.908: 17.8849% ( 499) 00:08:06.617 7813.908 - 7864.320: 21.9152% ( 601) 00:08:06.617 7864.320 - 7914.732: 26.0528% ( 617) 00:08:06.617 7914.732 - 7965.145: 30.5056% ( 664) 00:08:06.617 7965.145 - 8015.557: 35.7095% ( 776) 00:08:06.618 8015.557 - 8065.969: 40.6719% ( 740) 00:08:06.618 8065.969 - 8116.382: 45.6813% ( 747) 00:08:06.618 8116.382 - 8166.794: 49.6647% ( 594) 00:08:06.618 8166.794 - 8217.206: 53.6347% ( 592) 00:08:06.618 8217.206 - 8267.618: 57.3498% ( 554) 00:08:06.618 8267.618 - 8318.031: 60.8101% ( 516) 00:08:06.618 8318.031 - 8368.443: 63.4120% ( 388) 00:08:06.618 8368.443 - 8418.855: 65.9469% ( 378) 00:08:06.618 8418.855 - 8469.268: 68.5756% ( 392) 00:08:06.618 8469.268 - 8519.680: 70.8557% ( 340) 00:08:06.618 8519.680 - 8570.092: 73.2564% ( 358) 00:08:06.618 8570.092 - 8620.505: 74.8525% ( 238) 00:08:06.618 8620.505 - 8670.917: 76.2808% ( 213) 00:08:06.618 8670.917 - 8721.329: 77.8165% ( 229) 00:08:06.618 8721.329 - 8771.742: 78.9834% ( 174) 00:08:06.618 8771.742 - 8822.154: 80.0228% ( 155) 00:08:06.618 8822.154 - 8872.566: 80.8342% ( 121) 00:08:06.618 8872.566 - 8922.978: 81.8334% ( 149) 00:08:06.618 8922.978 - 8973.391: 82.5510% ( 107) 00:08:06.618 8973.391 - 9023.803: 83.4093% ( 128) 00:08:06.618 9023.803 - 9074.215: 84.2878% ( 131) 00:08:06.618 9074.215 - 9124.628: 84.8780% ( 88) 00:08:06.618 9124.628 - 9175.040: 85.5217% ( 96) 00:08:06.618 9175.040 - 9225.452: 86.0783% ( 83) 00:08:06.618 9225.452 - 9275.865: 86.6081% ( 79) 00:08:06.618 9275.865 - 9326.277: 87.0976% ( 73) 00:08:06.618 9326.277 - 9376.689: 87.5738% ( 71) 00:08:06.618 9376.689 - 9427.102: 88.0834% ( 76) 00:08:06.618 9427.102 - 9477.514: 88.5461% ( 69) 00:08:06.618 9477.514 - 9527.926: 88.9217% ( 56) 00:08:06.618 9527.926 - 9578.338: 89.3374% ( 62) 00:08:06.618 9578.338 - 9628.751: 89.6593% ( 48) 00:08:06.618 9628.751 - 9679.163: 90.0349% ( 56) 00:08:06.618 9679.163 - 9729.575: 90.4439% ( 61) 00:08:06.618 9729.575 - 9779.988: 90.8329% ( 58) 00:08:06.618 9779.988 - 9830.400: 91.0810% ( 37) 00:08:06.618 9830.400 - 9880.812: 91.3358% ( 38) 00:08:06.618 9880.812 - 9931.225: 91.7918% ( 68) 00:08:06.618 9931.225 - 9981.637: 92.0266% ( 35) 00:08:06.618 9981.637 - 10032.049: 92.2747% ( 37) 00:08:06.618 10032.049 - 10082.462: 92.4624% ( 28) 00:08:06.618 10082.462 - 10132.874: 92.7307% ( 40) 00:08:06.618 10132.874 - 10183.286: 92.9587% ( 34) 00:08:06.618 10183.286 - 10233.698: 93.1465% ( 28) 00:08:06.618 10233.698 - 10284.111: 93.3007% ( 23) 00:08:06.618 10284.111 - 10334.523: 93.4549% ( 23) 
00:08:06.618 10334.523 - 10384.935: 93.6025% ( 22) 00:08:06.618 10384.935 - 10435.348: 93.7433% ( 21) 00:08:06.618 10435.348 - 10485.760: 93.8774% ( 20) 00:08:06.618 10485.760 - 10536.172: 93.9579% ( 12) 00:08:06.618 10536.172 - 10586.585: 94.0652% ( 16) 00:08:06.618 10586.585 - 10636.997: 94.1524% ( 13) 00:08:06.618 10636.997 - 10687.409: 94.2127% ( 9) 00:08:06.618 10687.409 - 10737.822: 94.2731% ( 9) 00:08:06.618 10737.822 - 10788.234: 94.3468% ( 11) 00:08:06.618 10788.234 - 10838.646: 94.4273% ( 12) 00:08:06.618 10838.646 - 10889.058: 94.5078% ( 12) 00:08:06.618 10889.058 - 10939.471: 94.6017% ( 14) 00:08:06.618 10939.471 - 10989.883: 94.7224% ( 18) 00:08:06.618 10989.883 - 11040.295: 94.8833% ( 24) 00:08:06.618 11040.295 - 11090.708: 95.0376% ( 23) 00:08:06.618 11090.708 - 11141.120: 95.2521% ( 32) 00:08:06.619 11141.120 - 11191.532: 95.3930% ( 21) 00:08:06.619 11191.532 - 11241.945: 95.5003% ( 16) 00:08:06.619 11241.945 - 11292.357: 95.6210% ( 18) 00:08:06.619 11292.357 - 11342.769: 95.7551% ( 20) 00:08:06.619 11342.769 - 11393.182: 95.8490% ( 14) 00:08:06.619 11393.182 - 11443.594: 95.9898% ( 21) 00:08:06.619 11443.594 - 11494.006: 96.1642% ( 26) 00:08:06.619 11494.006 - 11544.418: 96.3720% ( 31) 00:08:06.619 11544.418 - 11594.831: 96.5397% ( 25) 00:08:06.619 11594.831 - 11645.243: 96.6604% ( 18) 00:08:06.619 11645.243 - 11695.655: 96.7543% ( 14) 00:08:06.619 11695.655 - 11746.068: 96.8281% ( 11) 00:08:06.619 11746.068 - 11796.480: 96.8951% ( 10) 00:08:06.619 11796.480 - 11846.892: 96.9689% ( 11) 00:08:06.619 11846.892 - 11897.305: 96.9957% ( 4) 00:08:06.619 12199.778 - 12250.191: 97.0292% ( 5) 00:08:06.619 12250.191 - 12300.603: 97.0494% ( 3) 00:08:06.619 12300.603 - 12351.015: 97.0963% ( 7) 00:08:06.619 12351.015 - 12401.428: 97.1835% ( 13) 00:08:06.619 12401.428 - 12451.840: 97.2908% ( 16) 00:08:06.619 12451.840 - 12502.252: 97.4785% ( 28) 00:08:06.619 12502.252 - 12552.665: 97.5523% ( 11) 00:08:06.619 12552.665 - 12603.077: 97.6395% ( 13) 00:08:06.619 12603.077 - 12653.489: 97.7334% ( 14) 00:08:06.619 12653.489 - 12703.902: 97.8205% ( 13) 00:08:06.619 12703.902 - 12754.314: 97.9211% ( 15) 00:08:06.619 12754.314 - 12804.726: 98.0217% ( 15) 00:08:06.619 12804.726 - 12855.138: 98.0955% ( 11) 00:08:06.619 12855.138 - 12905.551: 98.1760% ( 12) 00:08:06.619 12905.551 - 13006.375: 98.3101% ( 20) 00:08:06.619 13006.375 - 13107.200: 98.4643% ( 23) 00:08:06.619 13107.200 - 13208.025: 98.6186% ( 23) 00:08:06.619 13208.025 - 13308.849: 98.7460% ( 19) 00:08:06.619 13308.849 - 13409.674: 98.8667% ( 18) 00:08:06.619 13409.674 - 13510.498: 98.9472% ( 12) 00:08:06.619 13510.498 - 13611.323: 99.0209% ( 11) 00:08:06.619 13611.323 - 13712.148: 99.1014% ( 12) 00:08:06.619 13712.148 - 13812.972: 99.1349% ( 5) 00:08:06.619 13812.972 - 13913.797: 99.1416% ( 1) 00:08:06.619 18350.080 - 18450.905: 99.1483% ( 1) 00:08:06.619 18450.905 - 18551.729: 99.1886% ( 6) 00:08:06.619 18551.729 - 18652.554: 99.2355% ( 7) 00:08:06.619 18652.554 - 18753.378: 99.2825% ( 7) 00:08:06.619 18753.378 - 18854.203: 99.3294% ( 7) 00:08:06.619 18854.203 - 18955.028: 99.3696% ( 6) 00:08:06.619 18955.028 - 19055.852: 99.3965% ( 4) 00:08:06.619 19055.852 - 19156.677: 99.4233% ( 4) 00:08:06.619 19156.677 - 19257.502: 99.4501% ( 4) 00:08:06.619 19257.502 - 19358.326: 99.4769% ( 4) 00:08:06.619 19358.326 - 19459.151: 99.5105% ( 5) 00:08:06.619 19459.151 - 19559.975: 99.5440% ( 5) 00:08:06.619 19559.975 - 19660.800: 99.5708% ( 4) 00:08:06.619 22786.363 - 22887.188: 99.5842% ( 2) 00:08:06.619 22887.188 - 22988.012: 99.6245% ( 6) 
00:08:06.620 22988.012 - 23088.837: 99.6647% ( 6) 00:08:06.620 23088.837 - 23189.662: 99.7452% ( 12) 00:08:06.620 23189.662 - 23290.486: 99.7787% ( 5) 00:08:06.620 23290.486 - 23391.311: 99.8055% ( 4) 00:08:06.620 23391.311 - 23492.135: 99.8323% ( 4) 00:08:06.620 23592.960 - 23693.785: 99.8458% ( 2) 00:08:06.620 23693.785 - 23794.609: 99.8726% ( 4) 00:08:06.620 23794.609 - 23895.434: 99.8994% ( 4) 00:08:06.620 23895.434 - 23996.258: 99.9396% ( 6) 00:08:06.620 23996.258 - 24097.083: 99.9732% ( 5) 00:08:06.620 24097.083 - 24197.908: 100.0000% ( 4) 00:08:06.620 00:08:06.620 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:06.620 ============================================================================== 00:08:06.620 Range in us Cumulative IO count 00:08:06.620 4133.809 - 4159.015: 0.0067% ( 1) 00:08:06.620 4159.015 - 4184.222: 0.0536% ( 7) 00:08:06.620 4184.222 - 4209.428: 0.0872% ( 5) 00:08:06.620 4209.428 - 4234.634: 0.1609% ( 11) 00:08:06.620 4234.634 - 4259.840: 0.2347% ( 11) 00:08:06.620 4259.840 - 4285.046: 0.2817% ( 7) 00:08:06.620 4285.046 - 4310.252: 0.3152% ( 5) 00:08:06.620 4310.252 - 4335.458: 0.3286% ( 2) 00:08:06.620 4335.458 - 4360.665: 0.3420% ( 2) 00:08:06.620 4360.665 - 4385.871: 0.3487% ( 1) 00:08:06.620 4385.871 - 4411.077: 0.3621% ( 2) 00:08:06.620 4411.077 - 4436.283: 0.3755% ( 2) 00:08:06.620 4436.283 - 4461.489: 0.3889% ( 2) 00:08:06.620 4461.489 - 4486.695: 0.3957% ( 1) 00:08:06.620 4486.695 - 4511.902: 0.4091% ( 2) 00:08:06.620 4511.902 - 4537.108: 0.4158% ( 1) 00:08:06.620 4537.108 - 4562.314: 0.4292% ( 2) 00:08:06.620 6326.745 - 6351.951: 0.4426% ( 2) 00:08:06.620 6351.951 - 6377.157: 0.4627% ( 3) 00:08:06.620 6377.157 - 6402.363: 0.4895% ( 4) 00:08:06.620 6402.363 - 6427.569: 0.5164% ( 4) 00:08:06.620 6427.569 - 6452.775: 0.5633% ( 7) 00:08:06.620 6452.775 - 6503.188: 0.6907% ( 19) 00:08:06.620 6503.188 - 6553.600: 0.7511% ( 9) 00:08:06.620 6553.600 - 6604.012: 0.7779% ( 4) 00:08:06.620 6604.012 - 6654.425: 0.8047% ( 4) 00:08:06.620 6654.425 - 6704.837: 0.8315% ( 4) 00:08:06.620 6704.837 - 6755.249: 0.8517% ( 3) 00:08:06.620 6755.249 - 6805.662: 0.8584% ( 1) 00:08:06.620 7057.723 - 7108.135: 0.8785% ( 3) 00:08:06.620 7108.135 - 7158.548: 0.9321% ( 8) 00:08:06.620 7158.548 - 7208.960: 0.9925% ( 9) 00:08:06.620 7208.960 - 7259.372: 1.0797% ( 13) 00:08:06.620 7259.372 - 7309.785: 1.2607% ( 27) 00:08:06.620 7309.785 - 7360.197: 1.5960% ( 50) 00:08:06.621 7360.197 - 7410.609: 2.0990% ( 75) 00:08:06.621 7410.609 - 7461.022: 2.7428% ( 96) 00:08:06.621 7461.022 - 7511.434: 3.5743% ( 124) 00:08:06.621 7511.434 - 7561.846: 4.9490% ( 205) 00:08:06.621 7561.846 - 7612.258: 6.6457% ( 253) 00:08:06.621 7612.258 - 7662.671: 8.7111% ( 308) 00:08:06.621 7662.671 - 7713.083: 10.9174% ( 329) 00:08:06.621 7713.083 - 7763.495: 13.8948% ( 444) 00:08:06.621 7763.495 - 7813.908: 17.6100% ( 554) 00:08:06.621 7813.908 - 7864.320: 21.6202% ( 598) 00:08:06.621 7864.320 - 7914.732: 26.1065% ( 669) 00:08:06.621 7914.732 - 7965.145: 30.9549% ( 723) 00:08:06.621 7965.145 - 8015.557: 35.5687% ( 688) 00:08:06.621 8015.557 - 8065.969: 40.6317% ( 755) 00:08:06.621 8065.969 - 8116.382: 45.1448% ( 673) 00:08:06.621 8116.382 - 8166.794: 49.1215% ( 593) 00:08:06.621 8166.794 - 8217.206: 53.0043% ( 579) 00:08:06.621 8217.206 - 8267.618: 56.5987% ( 536) 00:08:06.621 8267.618 - 8318.031: 59.7841% ( 475) 00:08:06.621 8318.031 - 8368.443: 62.8219% ( 453) 00:08:06.621 8368.443 - 8418.855: 65.7256% ( 433) 00:08:06.621 8418.855 - 8469.268: 68.6159% ( 431) 00:08:06.621 8469.268 - 8519.680: 
71.2648% ( 395) 00:08:06.621 8519.680 - 8570.092: 73.3704% ( 314) 00:08:06.621 8570.092 - 8620.505: 75.1609% ( 267) 00:08:06.621 8620.505 - 8670.917: 76.6899% ( 228) 00:08:06.621 8670.917 - 8721.329: 78.0311% ( 200) 00:08:06.621 8721.329 - 8771.742: 79.1711% ( 170) 00:08:06.621 8771.742 - 8822.154: 80.0295% ( 128) 00:08:06.621 8822.154 - 8872.566: 80.8208% ( 118) 00:08:06.621 8872.566 - 8922.978: 81.5384% ( 107) 00:08:06.621 8922.978 - 8973.391: 82.5040% ( 144) 00:08:06.621 8973.391 - 9023.803: 83.3490% ( 126) 00:08:06.621 9023.803 - 9074.215: 84.2476% ( 134) 00:08:06.621 9074.215 - 9124.628: 84.9852% ( 110) 00:08:06.621 9124.628 - 9175.040: 85.7162% ( 109) 00:08:06.621 9175.040 - 9225.452: 86.3130% ( 89) 00:08:06.621 9225.452 - 9275.865: 86.8562% ( 81) 00:08:06.621 9275.865 - 9326.277: 87.2787% ( 63) 00:08:06.621 9326.277 - 9376.689: 87.7347% ( 68) 00:08:06.621 9376.689 - 9427.102: 88.1505% ( 62) 00:08:06.621 9427.102 - 9477.514: 88.5730% ( 63) 00:08:06.621 9477.514 - 9527.926: 89.0290% ( 68) 00:08:06.621 9527.926 - 9578.338: 89.6526% ( 93) 00:08:06.621 9578.338 - 9628.751: 90.1489% ( 74) 00:08:06.621 9628.751 - 9679.163: 90.5177% ( 55) 00:08:06.621 9679.163 - 9729.575: 90.8597% ( 51) 00:08:06.621 9729.575 - 9779.988: 91.2084% ( 52) 00:08:06.621 9779.988 - 9830.400: 91.5236% ( 47) 00:08:06.621 9830.400 - 9880.812: 91.8723% ( 52) 00:08:06.621 9880.812 - 9931.225: 92.2143% ( 51) 00:08:06.621 9931.225 - 9981.637: 92.4222% ( 31) 00:08:06.621 9981.637 - 10032.049: 92.5966% ( 26) 00:08:06.621 10032.049 - 10082.462: 92.7709% ( 26) 00:08:06.621 10082.462 - 10132.874: 92.9654% ( 29) 00:08:06.621 10132.874 - 10183.286: 93.2269% ( 39) 00:08:06.622 10183.286 - 10233.698: 93.3745% ( 22) 00:08:06.622 10233.698 - 10284.111: 93.5019% ( 19) 00:08:06.622 10284.111 - 10334.523: 93.6293% ( 19) 00:08:06.622 10334.523 - 10384.935: 93.7835% ( 23) 00:08:06.622 10384.935 - 10435.348: 93.8841% ( 15) 00:08:06.622 10435.348 - 10485.760: 93.9981% ( 17) 00:08:06.622 10485.760 - 10536.172: 94.0652% ( 10) 00:08:06.622 10536.172 - 10586.585: 94.1188% ( 8) 00:08:06.622 10586.585 - 10636.997: 94.1926% ( 11) 00:08:06.622 10636.997 - 10687.409: 94.2530% ( 9) 00:08:06.622 10687.409 - 10737.822: 94.3133% ( 9) 00:08:06.622 10737.822 - 10788.234: 94.3871% ( 11) 00:08:06.622 10788.234 - 10838.646: 94.4474% ( 9) 00:08:06.622 10838.646 - 10889.058: 94.5279% ( 12) 00:08:06.622 10889.058 - 10939.471: 94.5950% ( 10) 00:08:06.622 10939.471 - 10989.883: 94.6687% ( 11) 00:08:06.622 10989.883 - 11040.295: 94.8028% ( 20) 00:08:06.622 11040.295 - 11090.708: 94.9437% ( 21) 00:08:06.622 11090.708 - 11141.120: 95.0510% ( 16) 00:08:06.622 11141.120 - 11191.532: 95.1650% ( 17) 00:08:06.622 11191.532 - 11241.945: 95.3259% ( 24) 00:08:06.622 11241.945 - 11292.357: 95.4198% ( 14) 00:08:06.622 11292.357 - 11342.769: 95.5405% ( 18) 00:08:06.622 11342.769 - 11393.182: 95.6478% ( 16) 00:08:06.622 11393.182 - 11443.594: 95.7685% ( 18) 00:08:06.622 11443.594 - 11494.006: 95.8892% ( 18) 00:08:06.622 11494.006 - 11544.418: 96.0300% ( 21) 00:08:06.622 11544.418 - 11594.831: 96.1575% ( 19) 00:08:06.622 11594.831 - 11645.243: 96.2379% ( 12) 00:08:06.622 11645.243 - 11695.655: 96.3050% ( 10) 00:08:06.622 11695.655 - 11746.068: 96.3653% ( 9) 00:08:06.622 11746.068 - 11796.480: 96.4257% ( 9) 00:08:06.622 11796.480 - 11846.892: 96.4861% ( 9) 00:08:06.622 11846.892 - 11897.305: 96.5464% ( 9) 00:08:06.622 11897.305 - 11947.717: 96.6068% ( 9) 00:08:06.622 11947.717 - 11998.129: 96.6470% ( 6) 00:08:06.622 11998.129 - 12048.542: 96.6805% ( 5) 00:08:06.622 12048.542 - 
12098.954: 96.7610% ( 12) 00:08:06.622 12098.954 - 12149.366: 96.9555% ( 29) 00:08:06.622 12149.366 - 12199.778: 97.0292% ( 11) 00:08:06.622 12199.778 - 12250.191: 97.0829% ( 8) 00:08:06.622 12250.191 - 12300.603: 97.1432% ( 9) 00:08:06.622 12300.603 - 12351.015: 97.2103% ( 10) 00:08:06.622 12351.015 - 12401.428: 97.2841% ( 11) 00:08:06.622 12401.428 - 12451.840: 97.3780% ( 14) 00:08:06.622 12451.840 - 12502.252: 97.4785% ( 15) 00:08:06.623 12502.252 - 12552.665: 97.5590% ( 12) 00:08:06.623 12552.665 - 12603.077: 97.6462% ( 13) 00:08:06.623 12603.077 - 12653.489: 97.7267% ( 12) 00:08:06.623 12653.489 - 12703.902: 97.7937% ( 10) 00:08:06.623 12703.902 - 12754.314: 97.8742% ( 12) 00:08:06.623 12754.314 - 12804.726: 97.9547% ( 12) 00:08:06.623 12804.726 - 12855.138: 98.0284% ( 11) 00:08:06.623 12855.138 - 12905.551: 98.1089% ( 12) 00:08:06.623 12905.551 - 13006.375: 98.2900% ( 27) 00:08:06.623 13006.375 - 13107.200: 98.5113% ( 33) 00:08:06.623 13107.200 - 13208.025: 98.7192% ( 31) 00:08:06.623 13208.025 - 13308.849: 98.9337% ( 32) 00:08:06.623 13308.849 - 13409.674: 99.0276% ( 14) 00:08:06.623 13409.674 - 13510.498: 99.1081% ( 12) 00:08:06.623 13510.498 - 13611.323: 99.1416% ( 5) 00:08:06.623 17543.483 - 17644.308: 99.1483% ( 1) 00:08:06.623 17644.308 - 17745.132: 99.1685% ( 3) 00:08:06.623 17745.132 - 17845.957: 99.2221% ( 8) 00:08:06.623 17845.957 - 17946.782: 99.2758% ( 8) 00:08:06.623 17946.782 - 18047.606: 99.3160% ( 6) 00:08:06.623 18047.606 - 18148.431: 99.3562% ( 6) 00:08:06.623 18148.431 - 18249.255: 99.3898% ( 5) 00:08:06.623 18249.255 - 18350.080: 99.4233% ( 5) 00:08:06.623 18350.080 - 18450.905: 99.4568% ( 5) 00:08:06.623 18450.905 - 18551.729: 99.4903% ( 5) 00:08:06.623 18551.729 - 18652.554: 99.5172% ( 4) 00:08:06.623 18652.554 - 18753.378: 99.5574% ( 6) 00:08:06.623 18753.378 - 18854.203: 99.5708% ( 2) 00:08:06.623 22282.240 - 22383.065: 99.5842% ( 2) 00:08:06.623 22383.065 - 22483.889: 99.5976% ( 2) 00:08:06.623 22483.889 - 22584.714: 99.6379% ( 6) 00:08:06.623 22584.714 - 22685.538: 99.7720% ( 20) 00:08:06.623 22685.538 - 22786.363: 99.8055% ( 5) 00:08:06.623 22988.012 - 23088.837: 99.8391% ( 5) 00:08:06.623 23088.837 - 23189.662: 99.8592% ( 3) 00:08:06.623 23189.662 - 23290.486: 99.8927% ( 5) 00:08:06.623 23290.486 - 23391.311: 99.9195% ( 4) 00:08:06.623 23391.311 - 23492.135: 99.9464% ( 4) 00:08:06.623 23492.135 - 23592.960: 99.9799% ( 5) 00:08:06.623 23592.960 - 23693.785: 100.0000% ( 3) 00:08:06.623 00:08:06.623 ************************************ 00:08:06.623 END TEST nvme_perf 00:08:06.623 ************************************ 00:08:06.623 23:41:54 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:06.623 00:08:06.623 real 0m2.439s 00:08:06.623 user 0m2.181s 00:08:06.623 sys 0m0.162s 00:08:06.623 23:41:54 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.623 23:41:54 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.623 23:41:54 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:06.623 23:41:54 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:06.623 23:41:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.623 23:41:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.623 ************************************ 00:08:06.623 START TEST nvme_hello_world 00:08:06.623 ************************************ 00:08:06.623 23:41:54 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:06.624 Initializing NVMe Controllers 00:08:06.624 Attached to 0000:00:13.0 00:08:06.624 Namespace ID: 1 size: 1GB 00:08:06.624 Attached to 0000:00:11.0 00:08:06.624 Namespace ID: 1 size: 5GB 00:08:06.624 Attached to 0000:00:10.0 00:08:06.624 Namespace ID: 1 size: 6GB 00:08:06.624 Attached to 0000:00:12.0 00:08:06.624 Namespace ID: 1 size: 4GB 00:08:06.624 Namespace ID: 2 size: 4GB 00:08:06.624 Namespace ID: 3 size: 4GB 00:08:06.624 Initialization complete. 00:08:06.624 INFO: using host memory buffer for IO 00:08:06.624 Hello world! 00:08:06.624 INFO: using host memory buffer for IO 00:08:06.624 Hello world! 00:08:06.624 INFO: using host memory buffer for IO 00:08:06.624 Hello world! 00:08:06.624 INFO: using host memory buffer for IO 00:08:06.624 Hello world! 00:08:06.624 INFO: using host memory buffer for IO 00:08:06.624 Hello world! 00:08:06.624 INFO: using host memory buffer for IO 00:08:06.624 Hello world! 00:08:06.624 ************************************ 00:08:06.624 END TEST nvme_hello_world 00:08:06.624 ************************************ 00:08:06.624 00:08:06.624 real 0m0.183s 00:08:06.624 user 0m0.070s 00:08:06.624 sys 0m0.074s 00:08:06.624 23:41:54 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.624 23:41:54 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:06.624 23:41:54 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:06.624 23:41:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.624 23:41:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.624 23:41:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.624 ************************************ 00:08:06.624 START TEST nvme_sgl 00:08:06.624 ************************************ 00:08:06.624 23:41:54 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:06.885 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:06.885 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:06.886 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:06.886 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:06.886 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:06.886 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:06.886 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:06.886 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:06.886 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:06.886 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:06.886 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:06.886 0000:00:10.0: build_io_request_3 Invalid IO 
length parameter 00:08:06.886 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:06.886 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:06.886 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:06.886 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:06.886 NVMe Readv/Writev Request test 00:08:06.886 Attached to 0000:00:13.0 00:08:06.886 Attached to 0000:00:11.0 00:08:06.886 Attached to 0000:00:10.0 00:08:06.886 Attached to 0000:00:12.0 00:08:06.886 0000:00:11.0: build_io_request_2 test passed 00:08:06.886 0000:00:11.0: build_io_request_4 test passed 00:08:06.886 0000:00:11.0: build_io_request_5 test passed 00:08:06.886 0000:00:11.0: build_io_request_6 test passed 00:08:06.886 0000:00:11.0: build_io_request_7 test passed 00:08:06.886 0000:00:11.0: build_io_request_10 test passed 00:08:06.886 0000:00:10.0: build_io_request_2 test passed 00:08:06.886 0000:00:10.0: build_io_request_4 test passed 00:08:06.886 0000:00:10.0: build_io_request_5 test passed 00:08:06.886 0000:00:10.0: build_io_request_6 test passed 00:08:06.886 0000:00:10.0: build_io_request_7 test passed 00:08:06.886 0000:00:10.0: build_io_request_10 test passed 00:08:06.886 Cleaning up... 00:08:06.886 ************************************ 00:08:06.886 END TEST nvme_sgl 00:08:06.886 ************************************ 00:08:06.886 00:08:06.886 real 0m0.238s 00:08:06.886 user 0m0.121s 00:08:06.886 sys 0m0.076s 00:08:06.886 23:41:54 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.886 23:41:54 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:06.886 23:41:54 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:06.886 23:41:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.886 23:41:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.886 23:41:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.886 ************************************ 00:08:06.886 START TEST nvme_e2edp 00:08:06.886 ************************************ 00:08:06.886 23:41:54 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:07.144 NVMe Write/Read with End-to-End data protection test 00:08:07.144 Attached to 0000:00:13.0 00:08:07.144 Attached to 0000:00:11.0 00:08:07.144 Attached to 0000:00:10.0 00:08:07.144 Attached to 0000:00:12.0 00:08:07.144 Cleaning up... 
00:08:07.144 ************************************ 00:08:07.144 END TEST nvme_e2edp 00:08:07.144 ************************************ 00:08:07.144 00:08:07.144 real 0m0.176s 00:08:07.144 user 0m0.063s 00:08:07.144 sys 0m0.074s 00:08:07.144 23:41:55 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.144 23:41:55 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:07.144 23:41:55 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:07.144 23:41:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.144 23:41:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.144 23:41:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.144 ************************************ 00:08:07.144 START TEST nvme_reserve 00:08:07.144 ************************************ 00:08:07.144 23:41:55 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:07.144 ===================================================== 00:08:07.144 NVMe Controller at PCI bus 0, device 19, function 0 00:08:07.144 ===================================================== 00:08:07.144 Reservations: Not Supported 00:08:07.144 ===================================================== 00:08:07.144 NVMe Controller at PCI bus 0, device 17, function 0 00:08:07.144 ===================================================== 00:08:07.144 Reservations: Not Supported 00:08:07.144 ===================================================== 00:08:07.144 NVMe Controller at PCI bus 0, device 16, function 0 00:08:07.144 ===================================================== 00:08:07.144 Reservations: Not Supported 00:08:07.144 ===================================================== 00:08:07.144 NVMe Controller at PCI bus 0, device 18, function 0 00:08:07.144 ===================================================== 00:08:07.144 Reservations: Not Supported 00:08:07.144 Reservation test passed 00:08:07.144 ************************************ 00:08:07.144 END TEST nvme_reserve 00:08:07.144 ************************************ 00:08:07.144 00:08:07.144 real 0m0.195s 00:08:07.144 user 0m0.057s 00:08:07.144 sys 0m0.088s 00:08:07.144 23:41:55 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.144 23:41:55 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:07.403 23:41:55 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:07.403 23:41:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.403 23:41:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.403 23:41:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.403 ************************************ 00:08:07.403 START TEST nvme_err_injection 00:08:07.403 ************************************ 00:08:07.403 23:41:55 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:07.403 NVMe Error Injection test 00:08:07.403 Attached to 0000:00:13.0 00:08:07.403 Attached to 0000:00:11.0 00:08:07.403 Attached to 0000:00:10.0 00:08:07.403 Attached to 0000:00:12.0 00:08:07.403 0000:00:13.0: get features failed as expected 00:08:07.403 0000:00:11.0: get features failed as expected 00:08:07.403 0000:00:10.0: get features failed as expected 00:08:07.403 0000:00:12.0: get features failed as expected 00:08:07.403 
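The reservation output above identifies each controller by PCI bus, device and function rather than by BDF. Reading the device numbers as decimal and converting to hex recovers the addresses used elsewhere in this run; this mapping is inferred from the numbers themselves rather than stated in the log:

    bus 0, device 19, function 0  ->  0x13  ->  0000:00:13.0
    bus 0, device 17, function 0  ->  0x11  ->  0000:00:11.0
    bus 0, device 16, function 0  ->  0x10  ->  0000:00:10.0
    bus 0, device 18, function 0  ->  0x12  ->  0000:00:12.0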
0000:00:13.0: get features successfully as expected 00:08:07.403 0000:00:11.0: get features successfully as expected 00:08:07.403 0000:00:10.0: get features successfully as expected 00:08:07.403 0000:00:12.0: get features successfully as expected 00:08:07.403 0000:00:13.0: read failed as expected 00:08:07.403 0000:00:11.0: read failed as expected 00:08:07.403 0000:00:10.0: read failed as expected 00:08:07.403 0000:00:12.0: read failed as expected 00:08:07.403 0000:00:13.0: read successfully as expected 00:08:07.403 0000:00:11.0: read successfully as expected 00:08:07.403 0000:00:10.0: read successfully as expected 00:08:07.403 0000:00:12.0: read successfully as expected 00:08:07.403 Cleaning up... 00:08:07.403 ************************************ 00:08:07.403 END TEST nvme_err_injection 00:08:07.403 ************************************ 00:08:07.403 00:08:07.403 real 0m0.193s 00:08:07.403 user 0m0.070s 00:08:07.403 sys 0m0.074s 00:08:07.403 23:41:55 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.403 23:41:55 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:07.662 23:41:55 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:07.662 23:41:55 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:07.662 23:41:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.662 23:41:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.662 ************************************ 00:08:07.662 START TEST nvme_overhead 00:08:07.662 ************************************ 00:08:07.662 23:41:55 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:08.594 Initializing NVMe Controllers 00:08:08.594 Attached to 0000:00:13.0 00:08:08.594 Attached to 0000:00:11.0 00:08:08.594 Attached to 0000:00:10.0 00:08:08.594 Attached to 0000:00:12.0 00:08:08.594 Initialization complete. Launching workers. 
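The nvme_overhead run launched above uses the flags -o 4096 -t 1 -H -i 0. A minimal manual re-run is simply the same command; the flag reading given here (4 KiB I/O size, one-second run, print the submit/complete latency histograms, shared-memory id 0) is an interpretation of the invocation rather than something the log spells out, and root privileges are assumed:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0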
00:08:08.594 submit (in ns) avg, min, max = 12833.3, 11328.5, 155910.0 00:08:08.594 complete (in ns) avg, min, max = 8034.6, 7247.7, 74129.2 00:08:08.594 00:08:08.594 Submit histogram 00:08:08.594 ================ 00:08:08.594 Range in us Cumulative Count 00:08:08.594 11.323 - 11.372: 0.0205% ( 1) 00:08:08.594 11.422 - 11.471: 0.0819% ( 3) 00:08:08.594 11.471 - 11.520: 0.1433% ( 3) 00:08:08.594 11.520 - 11.569: 0.3684% ( 11) 00:08:08.594 11.569 - 11.618: 0.7777% ( 20) 00:08:08.594 11.618 - 11.668: 1.7806% ( 49) 00:08:08.594 11.668 - 11.717: 3.4384% ( 81) 00:08:08.594 11.717 - 11.766: 5.5874% ( 105) 00:08:08.594 11.766 - 11.815: 8.8621% ( 160) 00:08:08.594 11.815 - 11.865: 12.8121% ( 193) 00:08:08.594 11.865 - 11.914: 18.6860% ( 287) 00:08:08.594 11.914 - 11.963: 25.5628% ( 336) 00:08:08.594 11.963 - 12.012: 32.6648% ( 347) 00:08:08.594 12.012 - 12.062: 40.9128% ( 403) 00:08:08.594 12.062 - 12.111: 48.5878% ( 375) 00:08:08.594 12.111 - 12.160: 54.6459% ( 296) 00:08:08.594 12.160 - 12.209: 60.2743% ( 275) 00:08:08.594 12.209 - 12.258: 64.3471% ( 199) 00:08:08.594 12.258 - 12.308: 67.4580% ( 152) 00:08:08.594 12.308 - 12.357: 69.8936% ( 119) 00:08:08.594 12.357 - 12.406: 71.8174% ( 94) 00:08:08.594 12.406 - 12.455: 73.5776% ( 86) 00:08:08.594 12.455 - 12.505: 74.9693% ( 68) 00:08:08.594 12.505 - 12.554: 76.1359% ( 57) 00:08:08.594 12.554 - 12.603: 77.0774% ( 46) 00:08:08.594 12.603 - 12.702: 78.8170% ( 85) 00:08:08.594 12.702 - 12.800: 80.0450% ( 60) 00:08:08.594 12.800 - 12.898: 81.7438% ( 83) 00:08:08.594 12.898 - 12.997: 83.1150% ( 67) 00:08:08.594 12.997 - 13.095: 84.3840% ( 62) 00:08:08.594 13.095 - 13.194: 85.1208% ( 36) 00:08:08.594 13.194 - 13.292: 85.7348% ( 30) 00:08:08.594 13.292 - 13.391: 86.0827% ( 17) 00:08:08.594 13.391 - 13.489: 86.3897% ( 15) 00:08:08.594 13.489 - 13.588: 86.5944% ( 10) 00:08:08.594 13.588 - 13.686: 86.6967% ( 5) 00:08:08.594 13.686 - 13.785: 86.7581% ( 3) 00:08:08.594 13.785 - 13.883: 86.8604% ( 5) 00:08:08.594 13.883 - 13.982: 87.0037% ( 7) 00:08:08.594 13.982 - 14.080: 87.1060% ( 5) 00:08:08.594 14.080 - 14.178: 87.3312% ( 11) 00:08:08.594 14.178 - 14.277: 87.3721% ( 2) 00:08:08.594 14.277 - 14.375: 87.5563% ( 9) 00:08:08.594 14.375 - 14.474: 87.6995% ( 7) 00:08:08.594 14.474 - 14.572: 87.9656% ( 13) 00:08:08.594 14.572 - 14.671: 88.1089% ( 7) 00:08:08.594 14.671 - 14.769: 88.1907% ( 4) 00:08:08.594 14.769 - 14.868: 88.3340% ( 7) 00:08:08.594 14.868 - 14.966: 88.5387% ( 10) 00:08:08.595 14.966 - 15.065: 88.7229% ( 9) 00:08:08.595 15.065 - 15.163: 88.9480% ( 11) 00:08:08.595 15.163 - 15.262: 89.1322% ( 9) 00:08:08.595 15.262 - 15.360: 89.2345% ( 5) 00:08:08.595 15.360 - 15.458: 89.3573% ( 6) 00:08:08.595 15.458 - 15.557: 89.4392% ( 4) 00:08:08.595 15.557 - 15.655: 89.6029% ( 8) 00:08:08.595 15.655 - 15.754: 89.7462% ( 7) 00:08:08.595 15.754 - 15.852: 89.8690% ( 6) 00:08:08.595 15.852 - 15.951: 90.0532% ( 9) 00:08:08.595 15.951 - 16.049: 90.3193% ( 13) 00:08:08.595 16.049 - 16.148: 90.6467% ( 16) 00:08:08.595 16.148 - 16.246: 90.9537% ( 15) 00:08:08.595 16.246 - 16.345: 91.2812% ( 16) 00:08:08.595 16.345 - 16.443: 91.8952% ( 30) 00:08:08.595 16.443 - 16.542: 92.2022% ( 15) 00:08:08.595 16.542 - 16.640: 92.7139% ( 25) 00:08:08.595 16.640 - 16.738: 93.2665% ( 27) 00:08:08.595 16.738 - 16.837: 93.8395% ( 28) 00:08:08.595 16.837 - 16.935: 94.1261% ( 14) 00:08:08.595 16.935 - 17.034: 94.4331% ( 15) 00:08:08.595 17.034 - 17.132: 94.8219% ( 19) 00:08:08.595 17.132 - 17.231: 95.1085% ( 14) 00:08:08.595 17.231 - 17.329: 95.4564% ( 17) 00:08:08.595 17.329 - 17.428: 
95.7020% ( 12) 00:08:08.595 17.428 - 17.526: 95.9271% ( 11) 00:08:08.595 17.526 - 17.625: 96.4183% ( 24) 00:08:08.595 17.625 - 17.723: 96.7663% ( 17) 00:08:08.595 17.723 - 17.822: 97.2165% ( 22) 00:08:08.595 17.822 - 17.920: 97.5235% ( 15) 00:08:08.595 17.920 - 18.018: 97.7896% ( 13) 00:08:08.595 18.018 - 18.117: 97.9329% ( 7) 00:08:08.595 18.117 - 18.215: 98.0761% ( 7) 00:08:08.595 18.215 - 18.314: 98.1785% ( 5) 00:08:08.595 18.314 - 18.412: 98.3217% ( 7) 00:08:08.595 18.412 - 18.511: 98.3422% ( 1) 00:08:08.595 18.511 - 18.609: 98.3831% ( 2) 00:08:08.595 18.609 - 18.708: 98.4650% ( 4) 00:08:08.595 18.708 - 18.806: 98.6492% ( 9) 00:08:08.595 18.806 - 18.905: 98.7106% ( 3) 00:08:08.595 18.905 - 19.003: 98.7925% ( 4) 00:08:08.595 19.003 - 19.102: 98.9562% ( 8) 00:08:08.595 19.102 - 19.200: 99.0995% ( 7) 00:08:08.595 19.200 - 19.298: 99.2427% ( 7) 00:08:08.595 19.298 - 19.397: 99.3041% ( 3) 00:08:08.595 19.397 - 19.495: 99.3655% ( 3) 00:08:08.595 19.594 - 19.692: 99.3860% ( 1) 00:08:08.595 19.692 - 19.791: 99.4065% ( 1) 00:08:08.595 19.889 - 19.988: 99.4474% ( 2) 00:08:08.595 19.988 - 20.086: 99.4679% ( 1) 00:08:08.595 20.185 - 20.283: 99.5088% ( 2) 00:08:08.595 20.480 - 20.578: 99.5293% ( 1) 00:08:08.595 21.071 - 21.169: 99.5497% ( 1) 00:08:08.595 21.760 - 21.858: 99.5702% ( 1) 00:08:08.595 21.858 - 21.957: 99.5907% ( 1) 00:08:08.595 21.957 - 22.055: 99.6521% ( 3) 00:08:08.595 22.154 - 22.252: 99.7339% ( 4) 00:08:08.595 23.040 - 23.138: 99.7544% ( 1) 00:08:08.595 23.237 - 23.335: 99.7749% ( 1) 00:08:08.595 23.335 - 23.434: 99.7953% ( 1) 00:08:08.595 24.615 - 24.714: 99.8158% ( 1) 00:08:08.595 24.911 - 25.009: 99.8363% ( 1) 00:08:08.595 25.403 - 25.600: 99.8567% ( 1) 00:08:08.595 27.372 - 27.569: 99.8772% ( 1) 00:08:08.595 29.932 - 30.129: 99.8977% ( 1) 00:08:08.595 34.462 - 34.658: 99.9181% ( 1) 00:08:08.595 44.111 - 44.308: 99.9386% ( 1) 00:08:08.595 47.655 - 47.852: 99.9591% ( 1) 00:08:08.595 52.382 - 52.775: 99.9795% ( 1) 00:08:08.595 155.175 - 155.963: 100.0000% ( 1) 00:08:08.595 00:08:08.595 Complete histogram 00:08:08.595 ================== 00:08:08.595 Range in us Cumulative Count 00:08:08.595 7.237 - 7.286: 0.1842% ( 9) 00:08:08.595 7.286 - 7.335: 1.3303% ( 56) 00:08:08.595 7.335 - 7.385: 6.5698% ( 256) 00:08:08.595 7.385 - 7.434: 15.0430% ( 414) 00:08:08.595 7.434 - 7.483: 22.6361% ( 371) 00:08:08.595 7.483 - 7.532: 27.7937% ( 252) 00:08:08.595 7.532 - 7.582: 30.9660% ( 155) 00:08:08.595 7.582 - 7.631: 33.7086% ( 134) 00:08:08.595 7.631 - 7.680: 35.8371% ( 104) 00:08:08.595 7.680 - 7.729: 37.0856% ( 61) 00:08:08.595 7.729 - 7.778: 37.8019% ( 35) 00:08:08.595 7.778 - 7.828: 38.3545% ( 27) 00:08:08.595 7.828 - 7.877: 39.3369% ( 48) 00:08:08.595 7.877 - 7.926: 43.7986% ( 218) 00:08:08.595 7.926 - 7.975: 53.1928% ( 459) 00:08:08.595 7.975 - 8.025: 61.8093% ( 421) 00:08:08.595 8.025 - 8.074: 68.9521% ( 349) 00:08:08.595 8.074 - 8.123: 76.1359% ( 351) 00:08:08.595 8.123 - 8.172: 82.8080% ( 326) 00:08:08.595 8.172 - 8.222: 87.0446% ( 207) 00:08:08.595 8.222 - 8.271: 90.1351% ( 151) 00:08:08.595 8.271 - 8.320: 92.7548% ( 128) 00:08:08.595 8.320 - 8.369: 94.6991% ( 95) 00:08:08.595 8.369 - 8.418: 95.6815% ( 48) 00:08:08.595 8.418 - 8.468: 96.5616% ( 43) 00:08:08.595 8.468 - 8.517: 97.1347% ( 28) 00:08:08.595 8.517 - 8.566: 97.5235% ( 19) 00:08:08.595 8.566 - 8.615: 97.8305% ( 15) 00:08:08.595 8.615 - 8.665: 97.8919% ( 3) 00:08:08.595 8.665 - 8.714: 97.9124% ( 1) 00:08:08.595 8.714 - 8.763: 97.9738% ( 3) 00:08:08.595 8.763 - 8.812: 98.0557% ( 4) 00:08:08.595 8.812 - 8.862: 98.0966% ( 2) 
00:08:08.595 8.862 - 8.911: 98.1171% ( 1) 00:08:08.595 8.911 - 8.960: 98.1375% ( 1) 00:08:08.595 9.058 - 9.108: 98.1580% ( 1) 00:08:08.595 9.108 - 9.157: 98.1989% ( 2) 00:08:08.595 9.157 - 9.206: 98.2194% ( 1) 00:08:08.595 9.255 - 9.305: 98.2603% ( 2) 00:08:08.595 9.403 - 9.452: 98.2808% ( 1) 00:08:08.595 9.452 - 9.502: 98.3013% ( 1) 00:08:08.595 9.649 - 9.698: 98.3217% ( 1) 00:08:08.595 9.748 - 9.797: 98.3422% ( 1) 00:08:08.595 9.846 - 9.895: 98.3627% ( 1) 00:08:08.595 11.766 - 11.815: 98.4036% ( 2) 00:08:08.595 11.963 - 12.012: 98.4650% ( 3) 00:08:08.595 12.702 - 12.800: 98.4855% ( 1) 00:08:08.595 12.898 - 12.997: 98.5059% ( 1) 00:08:08.595 12.997 - 13.095: 98.5264% ( 1) 00:08:08.595 13.194 - 13.292: 98.5878% ( 3) 00:08:08.595 13.292 - 13.391: 98.6901% ( 5) 00:08:08.595 13.391 - 13.489: 98.7311% ( 2) 00:08:08.595 13.489 - 13.588: 98.7720% ( 2) 00:08:08.595 13.588 - 13.686: 98.8743% ( 5) 00:08:08.595 13.686 - 13.785: 99.0790% ( 10) 00:08:08.595 13.785 - 13.883: 99.2223% ( 7) 00:08:08.595 13.883 - 13.982: 99.3451% ( 6) 00:08:08.595 13.982 - 14.080: 99.4474% ( 5) 00:08:08.595 14.080 - 14.178: 99.4679% ( 1) 00:08:08.595 14.178 - 14.277: 99.5088% ( 2) 00:08:08.595 14.277 - 14.375: 99.5293% ( 1) 00:08:08.595 14.474 - 14.572: 99.5702% ( 2) 00:08:08.595 14.769 - 14.868: 99.6111% ( 2) 00:08:08.595 14.868 - 14.966: 99.6316% ( 1) 00:08:08.595 16.443 - 16.542: 99.6521% ( 1) 00:08:08.595 18.215 - 18.314: 99.6725% ( 1) 00:08:08.595 18.412 - 18.511: 99.6930% ( 1) 00:08:08.595 19.298 - 19.397: 99.7135% ( 1) 00:08:08.595 22.942 - 23.040: 99.7339% ( 1) 00:08:08.595 23.631 - 23.729: 99.7544% ( 1) 00:08:08.595 23.926 - 24.025: 99.7749% ( 1) 00:08:08.595 31.508 - 31.705: 99.7953% ( 1) 00:08:08.595 33.280 - 33.477: 99.8158% ( 1) 00:08:08.595 34.462 - 34.658: 99.8567% ( 2) 00:08:08.595 37.022 - 37.218: 99.8772% ( 1) 00:08:08.595 45.292 - 45.489: 99.8977% ( 1) 00:08:08.595 49.822 - 50.018: 99.9386% ( 2) 00:08:08.595 51.200 - 51.594: 99.9591% ( 1) 00:08:08.595 53.957 - 54.351: 99.9795% ( 1) 00:08:08.595 74.043 - 74.437: 100.0000% ( 1) 00:08:08.595 00:08:08.852 ************************************ 00:08:08.852 END TEST nvme_overhead 00:08:08.852 ************************************ 00:08:08.852 00:08:08.852 real 0m1.186s 00:08:08.852 user 0m1.069s 00:08:08.852 sys 0m0.071s 00:08:08.852 23:41:56 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.852 23:41:56 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:08.853 23:41:56 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:08.853 23:41:56 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:08.853 23:41:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.853 23:41:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.853 ************************************ 00:08:08.853 START TEST nvme_arbitration 00:08:08.853 ************************************ 00:08:08.853 23:41:56 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:12.143 Initializing NVMe Controllers 00:08:12.143 Attached to 0000:00:13.0 00:08:12.143 Attached to 0000:00:11.0 00:08:12.143 Attached to 0000:00:10.0 00:08:12.143 Attached to 0000:00:12.0 00:08:12.143 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:12.143 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:12.143 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:08:12.143 Associating QEMU NVMe 
Ctrl (12342 ) with lcore 3 00:08:12.143 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:12.143 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:12.143 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:12.143 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:12.143 Initialization complete. Launching workers. 00:08:12.143 Starting thread on core 1 with urgent priority queue 00:08:12.143 Starting thread on core 2 with urgent priority queue 00:08:12.143 Starting thread on core 3 with urgent priority queue 00:08:12.143 Starting thread on core 0 with urgent priority queue 00:08:12.143 QEMU NVMe Ctrl (12343 ) core 0: 6336.00 IO/s 15.78 secs/100000 ios 00:08:12.143 QEMU NVMe Ctrl (12342 ) core 0: 6336.00 IO/s 15.78 secs/100000 ios 00:08:12.143 QEMU NVMe Ctrl (12341 ) core 1: 6293.33 IO/s 15.89 secs/100000 ios 00:08:12.143 QEMU NVMe Ctrl (12342 ) core 1: 6293.33 IO/s 15.89 secs/100000 ios 00:08:12.143 QEMU NVMe Ctrl (12340 ) core 2: 6016.00 IO/s 16.62 secs/100000 ios 00:08:12.143 QEMU NVMe Ctrl (12342 ) core 3: 5973.33 IO/s 16.74 secs/100000 ios 00:08:12.143 ======================================================== 00:08:12.143 00:08:12.143 ************************************ 00:08:12.143 END TEST nvme_arbitration 00:08:12.143 ************************************ 00:08:12.143 00:08:12.143 real 0m3.206s 00:08:12.143 user 0m9.016s 00:08:12.143 sys 0m0.103s 00:08:12.143 23:41:59 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.143 23:41:59 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 23:42:00 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:12.143 23:42:00 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.143 23:42:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.143 23:42:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 ************************************ 00:08:12.143 START TEST nvme_single_aen 00:08:12.143 ************************************ 00:08:12.143 23:42:00 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:12.143 Asynchronous Event Request test 00:08:12.143 Attached to 0000:00:13.0 00:08:12.143 Attached to 0000:00:11.0 00:08:12.143 Attached to 0000:00:10.0 00:08:12.143 Attached to 0000:00:12.0 00:08:12.143 Reset controller to setup AER completions for this process 00:08:12.143 Registering asynchronous event callbacks... 
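In the arbitration summary above, the IO/s and secs/100000 ios columns are consistent with each other given the fixed -n 100000 I/O budget shown in the run configuration; for example, for QEMU NVMe Ctrl (12343 ) on core 0: 100000 ios / 6336.00 IO/s ≈ 15.78 s, which matches the printed value.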
00:08:12.143 Getting orig temperature thresholds of all controllers 00:08:12.143 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.143 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.143 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.143 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.143 Setting all controllers temperature threshold low to trigger AER 00:08:12.143 Waiting for all controllers temperature threshold to be set lower 00:08:12.143 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.143 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:12.143 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.143 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:12.143 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.143 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:12.143 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.143 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:12.143 Waiting for all controllers to trigger AER and reset threshold 00:08:12.143 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.143 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.143 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.143 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.143 Cleaning up... 00:08:12.143 00:08:12.143 real 0m0.201s 00:08:12.143 user 0m0.069s 00:08:12.143 sys 0m0.088s 00:08:12.143 23:42:00 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.143 ************************************ 00:08:12.143 END TEST nvme_single_aen 00:08:12.143 ************************************ 00:08:12.143 23:42:00 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:12.404 23:42:00 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:12.404 23:42:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.404 23:42:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.404 23:42:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.404 ************************************ 00:08:12.404 START TEST nvme_doorbell_aers 00:08:12.404 ************************************ 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
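The xtrace around this point shows nvme_doorbell_aers collecting the NVMe BDFs via gen_nvme.sh and jq, then driving the doorbell_aers binary once per device under a 10-second timeout. A readable sketch of that sequence, reconstructed from the traced commands (the loop body mirrors the per-device invocations that follow in the log):

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
            -r "trtype:PCIe traddr:$bdf"
    done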
00:08:12.404 23:42:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:12.405 23:42:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:12.405 23:42:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:12.405 23:42:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:12.665 [2024-11-26 23:42:00.553816] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:22.650 Executing: test_write_invalid_db 00:08:22.650 Waiting for AER completion... 00:08:22.650 Failure: test_write_invalid_db 00:08:22.650 00:08:22.650 Executing: test_invalid_db_write_overflow_sq 00:08:22.650 Waiting for AER completion... 00:08:22.651 Failure: test_invalid_db_write_overflow_sq 00:08:22.651 00:08:22.651 Executing: test_invalid_db_write_overflow_cq 00:08:22.651 Waiting for AER completion... 00:08:22.651 Failure: test_invalid_db_write_overflow_cq 00:08:22.651 00:08:22.651 23:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:22.651 23:42:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:22.651 [2024-11-26 23:42:10.555639] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:32.645 Executing: test_write_invalid_db 00:08:32.645 Waiting for AER completion... 00:08:32.645 Failure: test_write_invalid_db 00:08:32.645 00:08:32.645 Executing: test_invalid_db_write_overflow_sq 00:08:32.645 Waiting for AER completion... 00:08:32.645 Failure: test_invalid_db_write_overflow_sq 00:08:32.645 00:08:32.645 Executing: test_invalid_db_write_overflow_cq 00:08:32.645 Waiting for AER completion... 00:08:32.645 Failure: test_invalid_db_write_overflow_cq 00:08:32.645 00:08:32.645 23:42:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:32.645 23:42:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:32.645 [2024-11-26 23:42:20.585572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:42.640 Executing: test_write_invalid_db 00:08:42.640 Waiting for AER completion... 00:08:42.640 Failure: test_write_invalid_db 00:08:42.640 00:08:42.640 Executing: test_invalid_db_write_overflow_sq 00:08:42.640 Waiting for AER completion... 00:08:42.640 Failure: test_invalid_db_write_overflow_sq 00:08:42.640 00:08:42.640 Executing: test_invalid_db_write_overflow_cq 00:08:42.640 Waiting for AER completion... 
00:08:42.640 Failure: test_invalid_db_write_overflow_cq 00:08:42.640 00:08:42.640 23:42:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:42.640 23:42:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:42.640 [2024-11-26 23:42:30.618676] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 Executing: test_write_invalid_db 00:08:52.635 Waiting for AER completion... 00:08:52.635 Failure: test_write_invalid_db 00:08:52.635 00:08:52.635 Executing: test_invalid_db_write_overflow_sq 00:08:52.635 Waiting for AER completion... 00:08:52.635 Failure: test_invalid_db_write_overflow_sq 00:08:52.635 00:08:52.635 Executing: test_invalid_db_write_overflow_cq 00:08:52.635 Waiting for AER completion... 00:08:52.635 Failure: test_invalid_db_write_overflow_cq 00:08:52.635 00:08:52.635 00:08:52.635 real 0m40.177s 00:08:52.635 user 0m34.294s 00:08:52.635 sys 0m5.535s 00:08:52.635 23:42:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.635 23:42:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:52.635 ************************************ 00:08:52.635 END TEST nvme_doorbell_aers 00:08:52.635 ************************************ 00:08:52.635 23:42:40 nvme -- nvme/nvme.sh@97 -- # uname 00:08:52.635 23:42:40 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:52.635 23:42:40 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:52.635 23:42:40 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:52.635 23:42:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.635 23:42:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.635 ************************************ 00:08:52.635 START TEST nvme_multi_aen 00:08:52.635 ************************************ 00:08:52.635 23:42:40 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:52.635 [2024-11-26 23:42:40.657479] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.657541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.657552] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.658579] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.658599] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.658607] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.659471] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. 
Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.659493] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.659500] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.660349] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.660370] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 [2024-11-26 23:42:40.660377] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74789) is not found. Dropping the request. 00:08:52.635 Child process pid: 75311 00:08:52.893 [Child] Asynchronous Event Request test 00:08:52.893 [Child] Attached to 0000:00:13.0 00:08:52.893 [Child] Attached to 0000:00:11.0 00:08:52.893 [Child] Attached to 0000:00:10.0 00:08:52.893 [Child] Attached to 0000:00:12.0 00:08:52.893 [Child] Registering asynchronous event callbacks... 00:08:52.893 [Child] Getting orig temperature thresholds of all controllers 00:08:52.893 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.893 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.893 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.893 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.893 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:52.893 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.893 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.893 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.893 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.893 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.893 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.893 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.893 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.893 [Child] Cleaning up... 00:08:52.893 Asynchronous Event Request test 00:08:52.893 Attached to 0000:00:13.0 00:08:52.893 Attached to 0000:00:11.0 00:08:52.893 Attached to 0000:00:10.0 00:08:52.893 Attached to 0000:00:12.0 00:08:52.893 Reset controller to setup AER completions for this process 00:08:52.893 Registering asynchronous event callbacks... 
00:08:52.893 Getting orig temperature thresholds of all controllers 00:08:52.893 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.893 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.893 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.893 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.893 Setting all controllers temperature threshold low to trigger AER 00:08:52.893 Waiting for all controllers temperature threshold to be set lower 00:08:52.893 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.893 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:52.893 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.893 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:52.893 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.893 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:52.893 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.893 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:52.893 Waiting for all controllers to trigger AER and reset threshold 00:08:52.893 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.893 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.893 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.893 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.893 Cleaning up... 00:08:52.893 00:08:52.893 real 0m0.368s 00:08:52.893 user 0m0.123s 00:08:52.893 sys 0m0.147s 00:08:52.893 23:42:40 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.893 23:42:40 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:52.893 ************************************ 00:08:52.893 END TEST nvme_multi_aen 00:08:52.893 ************************************ 00:08:52.893 23:42:40 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:52.893 23:42:40 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:52.893 23:42:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.893 23:42:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.893 ************************************ 00:08:52.893 START TEST nvme_startup 00:08:52.893 ************************************ 00:08:52.893 23:42:40 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:53.152 Initializing NVMe Controllers 00:08:53.152 Attached to 0000:00:13.0 00:08:53.152 Attached to 0000:00:11.0 00:08:53.152 Attached to 0000:00:10.0 00:08:53.152 Attached to 0000:00:12.0 00:08:53.152 Initialization complete. 00:08:53.152 Time used:128659.977 (us). 
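The startup step above just attaches all controllers and reports the elapsed time; it can be rerun standalone with the same -t argument the harness passes (a sketch, assuming the same repo path and device bindings).
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000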
00:08:53.152 00:08:53.152 real 0m0.182s 00:08:53.152 user 0m0.055s 00:08:53.152 sys 0m0.081s 00:08:53.152 23:42:41 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.152 23:42:41 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:53.152 ************************************ 00:08:53.152 END TEST nvme_startup 00:08:53.152 ************************************ 00:08:53.152 23:42:41 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:53.152 23:42:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.152 23:42:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.152 23:42:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.152 ************************************ 00:08:53.152 START TEST nvme_multi_secondary 00:08:53.152 ************************************ 00:08:53.152 23:42:41 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:53.152 23:42:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75361 00:08:53.152 23:42:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75362 00:08:53.152 23:42:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:53.152 23:42:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:53.152 23:42:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:56.439 Initializing NVMe Controllers 00:08:56.439 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.439 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.439 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:56.439 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.439 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:56.439 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:56.439 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:56.439 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:56.439 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:56.439 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:56.439 Initialization complete. Launching workers. 
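The three spdk_nvme_perf instances above run concurrently and share state through the same shared-memory group id (-i 0), which is what makes this a primary/secondary multi-process test. A rough standalone equivalent is sketched below; which instance ends up as the DPDK primary depends on start order.
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # 5 s reader pinned to core 0
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # 3 s reader pinned to core 1
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # 3 s reader pinned to core 2
  wait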
00:08:56.439 ======================================================== 00:08:56.439 Latency(us) 00:08:56.439 Device Information : IOPS MiB/s Average min max 00:08:56.439 PCIE (0000:00:13.0) NSID 1 from core 2: 3173.21 12.40 5041.73 1385.38 13097.16 00:08:56.439 PCIE (0000:00:11.0) NSID 1 from core 2: 3173.21 12.40 5041.85 1420.28 12277.56 00:08:56.439 PCIE (0000:00:10.0) NSID 1 from core 2: 3173.21 12.40 5040.50 1327.68 15315.75 00:08:56.439 PCIE (0000:00:12.0) NSID 1 from core 2: 3173.21 12.40 5041.86 1323.30 12892.20 00:08:56.439 PCIE (0000:00:12.0) NSID 2 from core 2: 3173.21 12.40 5041.38 1362.08 12819.99 00:08:56.439 PCIE (0000:00:12.0) NSID 3 from core 2: 3173.21 12.40 5041.31 1303.74 12803.03 00:08:56.439 ======================================================== 00:08:56.439 Total : 19039.28 74.37 5041.44 1303.74 15315.75 00:08:56.439 00:08:56.439 Initializing NVMe Controllers 00:08:56.439 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.439 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.439 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:56.439 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.439 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:56.439 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:56.439 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:56.439 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:56.439 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:56.439 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:56.439 Initialization complete. Launching workers. 00:08:56.439 ======================================================== 00:08:56.439 Latency(us) 00:08:56.439 Device Information : IOPS MiB/s Average min max 00:08:56.439 PCIE (0000:00:13.0) NSID 1 from core 1: 7466.02 29.16 2142.54 1063.20 5395.19 00:08:56.439 PCIE (0000:00:11.0) NSID 1 from core 1: 7466.02 29.16 2142.54 1053.64 5266.87 00:08:56.439 PCIE (0000:00:10.0) NSID 1 from core 1: 7466.02 29.16 2141.57 966.44 5693.59 00:08:56.439 PCIE (0000:00:12.0) NSID 1 from core 1: 7466.02 29.16 2142.47 1087.64 5743.02 00:08:56.439 PCIE (0000:00:12.0) NSID 2 from core 1: 7466.02 29.16 2142.42 1006.18 5930.81 00:08:56.439 PCIE (0000:00:12.0) NSID 3 from core 1: 7466.02 29.16 2142.40 1058.15 5584.01 00:08:56.439 ======================================================== 00:08:56.439 Total : 44796.13 174.98 2142.32 966.44 5930.81 00:08:56.439 00:08:56.439 23:42:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75361 00:08:58.341 Initializing NVMe Controllers 00:08:58.341 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:58.341 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:58.341 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:58.341 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:58.341 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:58.341 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:58.341 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:58.341 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:58.341 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:58.341 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:58.341 Initialization complete. Launching workers. 
00:08:58.341 ======================================================== 00:08:58.341 Latency(us) 00:08:58.341 Device Information : IOPS MiB/s Average min max 00:08:58.341 PCIE (0000:00:13.0) NSID 1 from core 0: 10565.92 41.27 1513.91 712.09 5983.04 00:08:58.341 PCIE (0000:00:11.0) NSID 1 from core 0: 10564.32 41.27 1514.14 728.36 6268.64 00:08:58.341 PCIE (0000:00:10.0) NSID 1 from core 0: 10557.92 41.24 1514.20 707.67 6669.80 00:08:58.341 PCIE (0000:00:12.0) NSID 1 from core 0: 10565.92 41.27 1513.88 689.76 6334.48 00:08:58.341 PCIE (0000:00:12.0) NSID 2 from core 0: 10565.92 41.27 1513.86 689.89 5555.81 00:08:58.341 PCIE (0000:00:12.0) NSID 3 from core 0: 10565.92 41.27 1513.84 595.30 5357.89 00:08:58.341 ======================================================== 00:08:58.341 Total : 63385.91 247.60 1513.97 595.30 6669.80 00:08:58.341 00:08:58.341 23:42:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75362 00:08:58.341 23:42:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75431 00:08:58.341 23:42:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75432 00:08:58.341 23:42:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:58.341 23:42:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:58.341 23:42:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:01.620 Initializing NVMe Controllers 00:09:01.620 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.620 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:01.620 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:01.620 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:01.620 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:01.620 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:01.620 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:01.620 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:01.620 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:01.620 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:01.620 Initialization complete. Launching workers. 
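The MiB/s column in these latency tables is just IOPS scaled by the 4096-byte I/O size; checking the first core 0 row above:
  # 10565.92 IO/s * 4096 B per IO / 2^20 B per MiB = 41.27 MiB/s, matching the table
  awk 'BEGIN { printf "%.2f\n", 10565.92 * 4096 / (1024 * 1024) }'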
00:09:01.620 ======================================================== 00:09:01.620 Latency(us) 00:09:01.620 Device Information : IOPS MiB/s Average min max 00:09:01.620 PCIE (0000:00:13.0) NSID 1 from core 0: 7529.27 29.41 2124.60 780.81 5456.52 00:09:01.620 PCIE (0000:00:11.0) NSID 1 from core 0: 7529.27 29.41 2124.83 790.79 5312.89 00:09:01.620 PCIE (0000:00:10.0) NSID 1 from core 0: 7529.27 29.41 2123.91 766.61 5395.41 00:09:01.620 PCIE (0000:00:12.0) NSID 1 from core 0: 7529.27 29.41 2124.84 791.92 5200.56 00:09:01.620 PCIE (0000:00:12.0) NSID 2 from core 0: 7529.27 29.41 2124.83 772.51 5276.80 00:09:01.620 PCIE (0000:00:12.0) NSID 3 from core 0: 7529.27 29.41 2124.83 771.99 5285.78 00:09:01.620 ======================================================== 00:09:01.620 Total : 45175.63 176.47 2124.64 766.61 5456.52 00:09:01.620 00:09:01.620 Initializing NVMe Controllers 00:09:01.620 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.620 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:01.621 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:01.621 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:01.621 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:01.621 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:01.621 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:01.621 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:01.621 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:01.621 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:01.621 Initialization complete. Launching workers. 00:09:01.621 ======================================================== 00:09:01.621 Latency(us) 00:09:01.621 Device Information : IOPS MiB/s Average min max 00:09:01.621 PCIE (0000:00:13.0) NSID 1 from core 1: 7502.07 29.30 2132.25 781.45 6722.40 00:09:01.621 PCIE (0000:00:11.0) NSID 1 from core 1: 7502.07 29.30 2132.29 783.52 6557.66 00:09:01.621 PCIE (0000:00:10.0) NSID 1 from core 1: 7502.07 29.30 2131.40 774.04 6482.30 00:09:01.621 PCIE (0000:00:12.0) NSID 1 from core 1: 7502.07 29.30 2132.28 788.43 6008.57 00:09:01.621 PCIE (0000:00:12.0) NSID 2 from core 1: 7502.07 29.30 2132.21 783.53 6034.91 00:09:01.621 PCIE (0000:00:12.0) NSID 3 from core 1: 7502.07 29.30 2132.18 789.95 6778.37 00:09:01.621 ======================================================== 00:09:01.621 Total : 45012.45 175.83 2132.10 774.04 6778.37 00:09:01.621 00:09:03.519 Initializing NVMe Controllers 00:09:03.519 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:03.519 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:03.519 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:03.519 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:03.519 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:03.519 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:03.519 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:03.519 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:03.519 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:03.519 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:03.519 Initialization complete. Launching workers. 
00:09:03.519 ======================================================== 00:09:03.519 Latency(us) 00:09:03.519 Device Information : IOPS MiB/s Average min max 00:09:03.519 PCIE (0000:00:13.0) NSID 1 from core 2: 4405.28 17.21 3631.06 810.82 18221.88 00:09:03.519 PCIE (0000:00:11.0) NSID 1 from core 2: 4405.28 17.21 3631.61 810.89 17934.69 00:09:03.519 PCIE (0000:00:10.0) NSID 1 from core 2: 4405.28 17.21 3630.00 788.54 17904.79 00:09:03.519 PCIE (0000:00:12.0) NSID 1 from core 2: 4405.28 17.21 3631.08 740.21 18114.83 00:09:03.519 PCIE (0000:00:12.0) NSID 2 from core 2: 4405.28 17.21 3631.35 769.11 18043.36 00:09:03.519 PCIE (0000:00:12.0) NSID 3 from core 2: 4405.28 17.21 3631.28 672.46 18091.88 00:09:03.519 ======================================================== 00:09:03.519 Total : 26431.71 103.25 3631.06 672.46 18221.88 00:09:03.519 00:09:03.780 ************************************ 00:09:03.780 END TEST nvme_multi_secondary 00:09:03.780 ************************************ 00:09:03.780 23:42:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75431 00:09:03.780 23:42:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75432 00:09:03.780 00:09:03.780 real 0m10.518s 00:09:03.780 user 0m18.287s 00:09:03.780 sys 0m0.541s 00:09:03.780 23:42:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.780 23:42:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:03.780 23:42:51 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:03.780 23:42:51 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:03.780 23:42:51 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74389 ]] 00:09:03.780 23:42:51 nvme -- common/autotest_common.sh@1094 -- # kill 74389 00:09:03.780 23:42:51 nvme -- common/autotest_common.sh@1095 -- # wait 74389 00:09:03.780 [2024-11-26 23:42:51.696453] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.696567] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.696592] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.696614] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.697333] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.697392] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.697413] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.697435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.698093] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 
00:09:03.780 [2024-11-26 23:42:51.698150] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.698168] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.698190] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.698833] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.698894] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.698913] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.698932] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75310) is not found. Dropping the request. 00:09:03.780 [2024-11-26 23:42:51.763161] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:03.780 23:42:51 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:03.780 23:42:51 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:03.780 23:42:51 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:03.780 23:42:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.780 23:42:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.780 23:42:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.780 ************************************ 00:09:03.780 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:03.780 ************************************ 00:09:03.780 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:03.780 * Looking for test storage... 
00:09:03.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:03.780 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.780 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.780 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.046 --rc genhtml_branch_coverage=1 00:09:04.046 --rc genhtml_function_coverage=1 00:09:04.046 --rc genhtml_legend=1 00:09:04.046 --rc geninfo_all_blocks=1 00:09:04.046 --rc geninfo_unexecuted_blocks=1 00:09:04.046 00:09:04.046 ' 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.046 --rc genhtml_branch_coverage=1 00:09:04.046 --rc genhtml_function_coverage=1 00:09:04.046 --rc genhtml_legend=1 00:09:04.046 --rc geninfo_all_blocks=1 00:09:04.046 --rc geninfo_unexecuted_blocks=1 00:09:04.046 00:09:04.046 ' 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.046 --rc genhtml_branch_coverage=1 00:09:04.046 --rc genhtml_function_coverage=1 00:09:04.046 --rc genhtml_legend=1 00:09:04.046 --rc geninfo_all_blocks=1 00:09:04.046 --rc geninfo_unexecuted_blocks=1 00:09:04.046 00:09:04.046 ' 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:04.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.046 --rc genhtml_branch_coverage=1 00:09:04.046 --rc genhtml_function_coverage=1 00:09:04.046 --rc genhtml_legend=1 00:09:04.046 --rc geninfo_all_blocks=1 00:09:04.046 --rc geninfo_unexecuted_blocks=1 00:09:04.046 00:09:04.046 ' 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:04.046 
23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:04.046 23:42:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75594 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75594 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75594 ']' 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
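waitforlisten above starts the target and waits for its RPC socket at /var/tmp/spdk.sock. A minimal standalone sketch of the same flow follows; the socket-file check is a simplification of what the helper actually does, and the attach call reuses the exact arguments this test issues next.
  # Start the target on 4 cores and wait for the RPC socket to appear.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # RPCs can then be issued, e.g. attaching the first controller:
  sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b nvme0 -t PCIe -a 0000:00:10.0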
00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.046 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:04.046 [2024-11-26 23:42:52.101021] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:09:04.046 [2024-11-26 23:42:52.101200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75594 ] 00:09:04.305 [2024-11-26 23:42:52.261000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.305 [2024-11-26 23:42:52.302088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.305 [2024-11-26 23:42:52.302383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.305 [2024-11-26 23:42:52.302631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.305 [2024-11-26 23:42:52.302677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.878 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.878 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:04.878 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:04.878 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.878 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.879 nvme0n1 00:09:04.879 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.879 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:04.879 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_8DtJI.txt 00:09:04.879 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:04.879 23:42:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.879 23:42:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.136 true 00:09:05.136 23:42:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.136 23:42:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:05.136 23:42:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732664573 00:09:05.136 23:42:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75617 00:09:05.136 23:42:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:05.136 23:42:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:05.136 23:42:53 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:07.035 [2024-11-26 23:42:55.018526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:07.035 [2024-11-26 23:42:55.018776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:07.035 [2024-11-26 23:42:55.018804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:07.035 [2024-11-26 23:42:55.018818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.035 [2024-11-26 23:42:55.020431] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:07.035 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75617 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75617 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75617 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:07.035 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_8DtJI.txt 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:07.036 23:42:55 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_8DtJI.txt 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75594 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75594 ']' 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75594 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75594 00:09:07.036 killing process with pid 75594 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75594' 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75594 00:09:07.036 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75594 00:09:07.605 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:07.605 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:07.605 
************************************ 00:09:07.605 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:07.605 ************************************ 00:09:07.605 00:09:07.605 real 0m3.638s 00:09:07.605 user 0m12.802s 00:09:07.605 sys 0m0.553s 00:09:07.605 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.605 23:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:07.605 23:42:55 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:07.605 23:42:55 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:07.605 23:42:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.605 23:42:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.605 23:42:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:07.605 ************************************ 00:09:07.605 START TEST nvme_fio 00:09:07.605 ************************************ 00:09:07.605 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:07.605 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:07.605 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:07.605 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:07.605 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:07.605 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:07.605 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:07.605 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:07.605 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:07.605 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:07.605 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:07.605 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:07.605 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:07.605 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:07.605 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:07.605 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:07.865 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:07.865 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:07.865 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:07.865 23:42:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:07.865 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:07.866 23:42:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:08.127 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:08.127 fio-3.35 00:09:08.127 Starting 1 thread 00:09:13.412 00:09:13.412 test: (groupid=0, jobs=1): err= 0: pid=75740: Tue Nov 26 23:43:00 2024 00:09:13.412 read: IOPS=17.2k, BW=67.4MiB/s (70.6MB/s)(135MiB/2001msec) 00:09:13.412 slat (usec): min=4, max=853, avg= 6.81, stdev= 5.45 00:09:13.412 clat (usec): min=559, max=13718, avg=3686.95, stdev=1271.89 00:09:13.412 lat (usec): min=572, max=13778, avg=3693.76, stdev=1273.35 00:09:13.412 clat percentiles (usec): 00:09:13.412 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2737], 00:09:13.412 | 30.00th=[ 2900], 40.00th=[ 3064], 50.00th=[ 3261], 60.00th=[ 3458], 00:09:13.412 | 70.00th=[ 3752], 80.00th=[ 4555], 90.00th=[ 5669], 95.00th=[ 6587], 00:09:13.412 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[10683], 00:09:13.412 | 99.99th=[13304] 00:09:13.412 bw ( KiB/s): min=63888, max=76536, per=100.00%, avg=71986.67, stdev=7031.45, samples=3 00:09:13.412 iops : min=15972, max=19134, avg=17996.67, stdev=1757.86, samples=3 00:09:13.412 write: IOPS=17.3k, BW=67.4MiB/s (70.7MB/s)(135MiB/2001msec); 0 zone resets 00:09:13.412 slat (usec): min=4, max=349, avg= 7.05, stdev= 3.53 00:09:13.412 clat (usec): min=485, max=13430, avg=3707.94, stdev=1254.79 00:09:13.412 lat (usec): min=497, max=13445, avg=3714.98, stdev=1256.21 00:09:13.412 clat percentiles (usec): 00:09:13.412 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2802], 00:09:13.412 | 30.00th=[ 2933], 40.00th=[ 3097], 50.00th=[ 3294], 60.00th=[ 3458], 00:09:13.412 | 70.00th=[ 3785], 80.00th=[ 4490], 90.00th=[ 5669], 95.00th=[ 6587], 00:09:13.412 | 99.00th=[ 7635], 99.50th=[ 7963], 99.90th=[ 8979], 99.95th=[10945], 00:09:13.412 | 99.99th=[13042] 00:09:13.412 bw ( KiB/s): min=63536, max=76792, per=100.00%, avg=71917.67, stdev=7290.85, samples=3 00:09:13.412 iops : min=15884, max=19198, avg=17979.33, stdev=1822.65, samples=3 00:09:13.412 lat 
(usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:13.412 lat (msec) : 2=0.11%, 4=74.18%, 10=25.63%, 20=0.07% 00:09:13.412 cpu : usr=98.90%, sys=0.00%, ctx=3, majf=0, minf=624 00:09:13.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.412 issued rwts: total=34506,34538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.412 00:09:13.412 Run status group 0 (all jobs): 00:09:13.412 READ: bw=67.4MiB/s (70.6MB/s), 67.4MiB/s-67.4MiB/s (70.6MB/s-70.6MB/s), io=135MiB (141MB), run=2001-2001msec 00:09:13.412 WRITE: bw=67.4MiB/s (70.7MB/s), 67.4MiB/s-67.4MiB/s (70.7MB/s-70.7MB/s), io=135MiB (141MB), run=2001-2001msec 00:09:13.412 ----------------------------------------------------- 00:09:13.412 Suppressions used: 00:09:13.412 count bytes template 00:09:13.412 1 32 /usr/src/fio/parse.c 00:09:13.412 1 8 libtcmalloc_minimal.so 00:09:13.412 ----------------------------------------------------- 00:09:13.412 00:09:13.412 23:43:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.412 23:43:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:13.412 23:43:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:13.412 23:43:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.412 23:43:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.412 23:43:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:13.674 23:43:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:13.674 23:43:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:13.674 23:43:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.674 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:13.674 fio-3.35 00:09:13.674 Starting 1 thread 00:09:20.260 00:09:20.260 test: (groupid=0, jobs=1): err= 0: pid=75801: Tue Nov 26 23:43:07 2024 00:09:20.260 read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec) 00:09:20.260 slat (usec): min=4, max=101, avg= 6.89, stdev= 3.13 00:09:20.260 clat (usec): min=219, max=10706, avg=3545.55, stdev=1153.22 00:09:20.260 lat (usec): min=226, max=10767, avg=3552.44, stdev=1154.83 00:09:20.260 clat percentiles (usec): 00:09:20.260 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2769], 00:09:20.260 | 30.00th=[ 2900], 40.00th=[ 3064], 50.00th=[ 3228], 60.00th=[ 3359], 00:09:20.260 | 70.00th=[ 3556], 80.00th=[ 3916], 90.00th=[ 5276], 95.00th=[ 6259], 00:09:20.260 | 99.00th=[ 7635], 99.50th=[ 8094], 99.90th=[ 9110], 99.95th=[ 9372], 00:09:20.260 | 99.99th=[10552] 00:09:20.260 bw ( KiB/s): min=60992, max=84584, per=97.09%, avg=69453.33, stdev=13134.14, samples=3 00:09:20.260 iops : min=15248, max=21146, avg=17363.33, stdev=3283.53, samples=3 00:09:20.260 write: IOPS=17.9k, BW=69.9MiB/s (73.2MB/s)(140MiB/2001msec); 0 zone resets 00:09:20.260 slat (usec): min=5, max=139, avg= 7.19, stdev= 3.17 00:09:20.260 clat (usec): min=280, max=10605, avg=3585.56, stdev=1164.00 00:09:20.260 lat (usec): min=288, max=10620, avg=3592.75, stdev=1165.58 00:09:20.260 clat percentiles (usec): 00:09:20.260 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2802], 00:09:20.260 | 30.00th=[ 2933], 40.00th=[ 3097], 50.00th=[ 3261], 60.00th=[ 3392], 00:09:20.260 | 70.00th=[ 3589], 80.00th=[ 3949], 90.00th=[ 5276], 95.00th=[ 6325], 00:09:20.260 | 99.00th=[ 7767], 99.50th=[ 8225], 99.90th=[ 9241], 99.95th=[ 9372], 00:09:20.260 | 99.99th=[10290] 00:09:20.260 bw ( KiB/s): min=60520, max=84528, per=97.02%, avg=69397.33, stdev=13169.06, samples=3 00:09:20.260 iops : min=15130, max=21132, avg=17349.33, stdev=3292.27, samples=3 00:09:20.260 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:20.260 lat (msec) : 2=0.10%, 4=80.78%, 10=19.06%, 20=0.02% 00:09:20.260 cpu : usr=99.00%, sys=0.00%, ctx=5, majf=0, minf=625 00:09:20.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:20.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.260 issued rwts: total=35787,35784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.260 00:09:20.260 Run status group 0 (all jobs): 00:09:20.260 READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:09:20.260 WRITE: bw=69.9MiB/s (73.2MB/s), 69.9MiB/s-69.9MiB/s (73.2MB/s-73.2MB/s), io=140MiB (147MB), run=2001-2001msec 00:09:20.260 ----------------------------------------------------- 00:09:20.260 Suppressions used: 00:09:20.260 count bytes template 00:09:20.260 1 32 
/usr/src/fio/parse.c 00:09:20.260 1 8 libtcmalloc_minimal.so 00:09:20.260 ----------------------------------------------------- 00:09:20.260 00:09:20.260 23:43:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:20.260 23:43:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:20.260 23:43:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:20.260 23:43:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.260 23:43:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:20.260 23:43:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.260 23:43:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:20.260 23:43:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:20.260 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.261 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:20.261 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:20.261 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:20.261 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:20.261 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:20.261 23:43:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.261 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:20.261 fio-3.35 00:09:20.261 Starting 1 thread 00:09:25.596 00:09:25.596 test: (groupid=0, jobs=1): err= 0: pid=75857: Tue Nov 26 23:43:12 2024 00:09:25.596 read: IOPS=14.6k, BW=57.2MiB/s (59.9MB/s)(114MiB/2001msec) 00:09:25.596 slat (nsec): min=6000, max=89868, avg=8088.87, stdev=3753.67 00:09:25.596 clat (usec): min=391, max=10238, avg=4332.42, stdev=1366.18 00:09:25.596 lat (usec): min=398, max=10252, avg=4340.51, 
stdev=1367.66 00:09:25.596 clat percentiles (usec): 00:09:25.596 | 1.00th=[ 2769], 5.00th=[ 2999], 10.00th=[ 3130], 20.00th=[ 3294], 00:09:25.596 | 30.00th=[ 3425], 40.00th=[ 3556], 50.00th=[ 3720], 60.00th=[ 4047], 00:09:25.596 | 70.00th=[ 4752], 80.00th=[ 5538], 90.00th=[ 6521], 95.00th=[ 7177], 00:09:25.596 | 99.00th=[ 8160], 99.50th=[ 8586], 99.90th=[ 9241], 99.95th=[ 9634], 00:09:25.596 | 99.99th=[ 9896] 00:09:25.597 bw ( KiB/s): min=57120, max=61800, per=100.00%, avg=59157.33, stdev=2398.00, samples=3 00:09:25.597 iops : min=14280, max=15450, avg=14789.33, stdev=599.50, samples=3 00:09:25.597 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(115MiB/2001msec); 0 zone resets 00:09:25.597 slat (nsec): min=6177, max=75275, avg=8462.20, stdev=3845.60 00:09:25.597 clat (usec): min=450, max=9860, avg=4374.11, stdev=1355.09 00:09:25.597 lat (usec): min=465, max=9876, avg=4382.58, stdev=1356.51 00:09:25.597 clat percentiles (usec): 00:09:25.597 | 1.00th=[ 2835], 5.00th=[ 3064], 10.00th=[ 3163], 20.00th=[ 3326], 00:09:25.597 | 30.00th=[ 3458], 40.00th=[ 3589], 50.00th=[ 3785], 60.00th=[ 4113], 00:09:25.597 | 70.00th=[ 4752], 80.00th=[ 5604], 90.00th=[ 6587], 95.00th=[ 7242], 00:09:25.597 | 99.00th=[ 8160], 99.50th=[ 8586], 99.90th=[ 9241], 99.95th=[ 9503], 00:09:25.597 | 99.99th=[ 9765] 00:09:25.597 bw ( KiB/s): min=56288, max=62176, per=100.00%, avg=58986.67, stdev=2974.51, samples=3 00:09:25.597 iops : min=14072, max=15544, avg=14746.67, stdev=743.63, samples=3 00:09:25.597 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.02% 00:09:25.597 lat (msec) : 2=0.06%, 4=58.13%, 10=41.76%, 20=0.01% 00:09:25.597 cpu : usr=98.50%, sys=0.20%, ctx=3, majf=0, minf=624 00:09:25.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:25.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.597 issued rwts: total=29286,29344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.597 00:09:25.597 Run status group 0 (all jobs): 00:09:25.597 READ: bw=57.2MiB/s (59.9MB/s), 57.2MiB/s-57.2MiB/s (59.9MB/s-59.9MB/s), io=114MiB (120MB), run=2001-2001msec 00:09:25.597 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=115MiB (120MB), run=2001-2001msec 00:09:25.597 ----------------------------------------------------- 00:09:25.597 Suppressions used: 00:09:25.597 count bytes template 00:09:25.597 1 32 /usr/src/fio/parse.c 00:09:25.597 1 8 libtcmalloc_minimal.so 00:09:25.597 ----------------------------------------------------- 00:09:25.597 00:09:25.597 23:43:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:25.597 23:43:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:25.597 23:43:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:25.597 23:43:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:25.597 23:43:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:25.597 23:43:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:25.597 23:43:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:25.597 23:43:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:25.597 23:43:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:25.857 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:25.857 fio-3.35 00:09:25.857 Starting 1 thread 00:09:29.160 00:09:29.160 test: (groupid=0, jobs=1): err= 0: pid=75912: Tue Nov 26 23:43:16 2024 00:09:29.160 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(90.2MiB/2001msec) 00:09:29.160 slat (usec): min=6, max=167, avg= 7.96, stdev= 3.93 00:09:29.160 clat (usec): min=522, max=13716, avg=3869.58, stdev=1519.36 00:09:29.160 lat (usec): min=529, max=13776, avg=3877.54, stdev=1520.20 00:09:29.160 clat percentiles (usec): 00:09:29.160 | 1.00th=[ 1434], 5.00th=[ 1745], 10.00th=[ 2040], 20.00th=[ 2573], 00:09:29.160 | 30.00th=[ 3032], 40.00th=[ 3326], 50.00th=[ 3589], 60.00th=[ 3884], 00:09:29.160 | 70.00th=[ 4424], 80.00th=[ 5211], 90.00th=[ 6063], 95.00th=[ 6718], 00:09:29.160 | 99.00th=[ 7832], 99.50th=[ 8455], 99.90th=[10683], 99.95th=[13173], 00:09:29.160 | 99.99th=[13698] 00:09:29.160 bw ( KiB/s): min=40736, max=52455, per=100.00%, avg=47146.33, stdev=5936.67, samples=3 00:09:29.160 iops : min=10184, max=13113, avg=11786.33, stdev=1483.83, samples=3 00:09:29.160 write: IOPS=11.5k, BW=44.8MiB/s (46.9MB/s)(89.6MiB/2001msec); 0 zone resets 00:09:29.160 slat (nsec): min=6303, max=86872, avg=8326.48, stdev=3801.98 00:09:29.160 clat (usec): min=557, max=48444, avg=7237.38, stdev=9624.57 00:09:29.160 lat (usec): min=564, max=48452, avg=7245.70, stdev=9624.61 00:09:29.160 clat percentiles (usec): 00:09:29.160 | 1.00th=[ 1516], 5.00th=[ 1926], 10.00th=[ 2311], 20.00th=[ 2933], 
00:09:29.160 | 30.00th=[ 3326], 40.00th=[ 3556], 50.00th=[ 3884], 60.00th=[ 4424], 00:09:29.160 | 70.00th=[ 5276], 80.00th=[ 6194], 90.00th=[20317], 95.00th=[35914], 00:09:29.160 | 99.00th=[41681], 99.50th=[43779], 99.90th=[45876], 99.95th=[46400], 00:09:29.160 | 99.99th=[47449] 00:09:29.160 bw ( KiB/s): min=41256, max=52694, per=100.00%, avg=47084.67, stdev=5722.15, samples=3 00:09:29.160 iops : min=10314, max=13173, avg=11771.00, stdev=1430.29, samples=3 00:09:29.160 lat (usec) : 750=0.04%, 1000=0.02% 00:09:29.160 lat (msec) : 2=7.60%, 4=50.49%, 10=36.24%, 20=0.59%, 50=5.02% 00:09:29.160 cpu : usr=98.70%, sys=0.10%, ctx=7, majf=0, minf=623 00:09:29.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:29.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.160 issued rwts: total=23093,22933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.160 00:09:29.160 Run status group 0 (all jobs): 00:09:29.160 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.2MiB (94.6MB), run=2001-2001msec 00:09:29.160 WRITE: bw=44.8MiB/s (46.9MB/s), 44.8MiB/s-44.8MiB/s (46.9MB/s-46.9MB/s), io=89.6MiB (93.9MB), run=2001-2001msec 00:09:29.160 ----------------------------------------------------- 00:09:29.160 Suppressions used: 00:09:29.160 count bytes template 00:09:29.160 1 32 /usr/src/fio/parse.c 00:09:29.160 1 8 libtcmalloc_minimal.so 00:09:29.160 ----------------------------------------------------- 00:09:29.160 00:09:29.160 ************************************ 00:09:29.160 END TEST nvme_fio 00:09:29.160 ************************************ 00:09:29.160 23:43:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:29.160 23:43:16 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:29.160 00:09:29.160 real 0m21.489s 00:09:29.160 user 0m14.958s 00:09:29.160 sys 0m10.162s 00:09:29.160 23:43:16 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.160 23:43:16 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:29.160 ************************************ 00:09:29.160 END TEST nvme 00:09:29.160 ************************************ 00:09:29.160 00:09:29.160 real 1m29.905s 00:09:29.160 user 3m30.188s 00:09:29.160 sys 0m21.025s 00:09:29.160 23:43:17 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.160 23:43:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.160 23:43:17 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:29.160 23:43:17 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:29.160 23:43:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.160 23:43:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.160 23:43:17 -- common/autotest_common.sh@10 -- # set +x 00:09:29.160 ************************************ 00:09:29.160 START TEST nvme_scc 00:09:29.160 ************************************ 00:09:29.160 23:43:17 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:29.160 * Looking for test storage... 
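Note on the nvme_fio passes above: fio is driven through the SPDK NVMe fio plugin rather than a kernel block device. The test resolves the ASan runtime with ldd, preloads it together with the plugin, and addresses the namespace by PCIe BDF in --filename. A minimal sketch of that invocation, using only paths and options visible in the trace (the job file contents themselves are not reproduced here):

# Sketch of the fio-over-SPDK invocation traced above; paths are taken from the log, this is not the exact test script.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
# Resolve the ASan runtime the plugin was built against, as the trace does with ldd | grep libasan | awk.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Preload ASan first, then the SPDK ioengine, and point fio at the controller by PCIe address.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
  '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096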
00:09:29.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:29.160 23:43:17 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.160 23:43:17 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.160 23:43:17 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.160 23:43:17 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.160 23:43:17 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:29.160 23:43:17 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.160 23:43:17 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.160 --rc genhtml_branch_coverage=1 00:09:29.160 --rc genhtml_function_coverage=1 00:09:29.160 --rc genhtml_legend=1 00:09:29.160 --rc geninfo_all_blocks=1 00:09:29.160 --rc geninfo_unexecuted_blocks=1 00:09:29.160 00:09:29.160 ' 00:09:29.160 23:43:17 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.160 --rc genhtml_branch_coverage=1 00:09:29.160 --rc genhtml_function_coverage=1 00:09:29.160 --rc genhtml_legend=1 00:09:29.160 --rc geninfo_all_blocks=1 00:09:29.161 --rc geninfo_unexecuted_blocks=1 00:09:29.161 00:09:29.161 ' 00:09:29.161 23:43:17 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:29.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.161 --rc genhtml_branch_coverage=1 00:09:29.161 --rc genhtml_function_coverage=1 00:09:29.161 --rc genhtml_legend=1 00:09:29.161 --rc geninfo_all_blocks=1 00:09:29.161 --rc geninfo_unexecuted_blocks=1 00:09:29.161 00:09:29.161 ' 00:09:29.161 23:43:17 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.161 --rc genhtml_branch_coverage=1 00:09:29.161 --rc genhtml_function_coverage=1 00:09:29.161 --rc genhtml_legend=1 00:09:29.161 --rc geninfo_all_blocks=1 00:09:29.161 --rc geninfo_unexecuted_blocks=1 00:09:29.161 00:09:29.161 ' 00:09:29.161 23:43:17 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.161 23:43:17 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.161 23:43:17 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.161 23:43:17 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.161 23:43:17 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.161 23:43:17 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.161 23:43:17 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.161 23:43:17 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.161 23:43:17 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:29.161 23:43:17 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
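The lcov check above runs through the version helpers in scripts/common.sh: lt hands both strings to cmp_versions, which splits them on '.', '-' and ':' and compares the numeric components left to right. A condensed sketch of that logic, simplified from the trace (the real helper also validates each component with decimal and supports the remaining operators):

# Condensed sketch of the lt/cmp_versions logic traced above; simplified, not the helper from scripts/common.sh.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local op="$2" v
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # versions are equal
}
# lt 1.15 2 returns 0 here, matching the trace: 1 < 2 in the first component decides it.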
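The long xtrace further below is scan_nvme_ctrls from test/common/nvme/functions.sh: for every /sys/class/nvme/nvmeX it runs nvme id-ctrl (and id-ns per namespace) and folds each "field : value" line of the output into a bash associative array, which is why the log is dominated by nvme0[...]=... assignments. A rough sketch of that read loop, with the caveat that the real nvme_get differs in detail (it uses local -gA and a shift-based argument list):

# Rough sketch of the nvme_get pattern whose xtrace follows; the real helper lives in nvme/functions.sh.
declare -A nvme0
nvme_get() {
    local ref=$1 dev=$2 reg val
    # nvme id-ctrl prints one "field : value" pair per line, e.g. "vid : 0x1b36".
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue
        # Strip padding around the field name and keep the value, e.g. nvme0[vid]=0x1b36.
        eval "${ref}[${reg// /}]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
}
nvme_get nvme0 /dev/nvme0   # afterwards ${nvme0[vid]} is 0x1b36, ${nvme0[sn]} is "12341 ", and so on.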
00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:29.161 23:43:17 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:29.161 23:43:17 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.161 23:43:17 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:29.161 23:43:17 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:29.161 23:43:17 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:29.161 23:43:17 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:29.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.733 Waiting for block devices as requested 00:09:29.733 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.995 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.995 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.995 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.294 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:35.294 23:43:23 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:35.294 23:43:23 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.294 23:43:23 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:35.294 23:43:23 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.294 23:43:23 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.294 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:35.295 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:35.296 23:43:23 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.296 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.297 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.297 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:35.298 
23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
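The eval/IFS pattern repeated throughout this trace comes from the nvme_get helper in nvme/functions.sh. Below is a minimal sketch of that parsing loop, reconstructed from the trace rather than taken from the actual SPDK source; the field-name trimming and the plain "nvme" call are assumptions (the log invokes a pinned /usr/local/src/nvme-cli/nvme binary).

nvme_get() {                                   # e.g. nvme_get ng0n1 id-ns /dev/ng0n1
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                        # global associative array named after the device
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # assumption: drop the padding nvme-cli prints around names
        [[ -n $val ]] || continue              # keep only "name : value" lines
        eval "${ref}[$reg]=\"${val# }\""       # e.g. ng0n1[nsze]="0x140000", as logged above
    done < <(nvme "$@")                        # stand-in for the pinned nvme-cli binary in this log
}

After nvme_get nvme0 id-ctrl /dev/nvme0 has run, for example, ${nvme0[subnqn]} holds nqn.2019-08.org.qemu:12341 and ${nvme0[vwc]} holds 0x7, matching the assignments recorded earlier in the trace.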
00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:35.298 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.298 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:35.299 23:43:23 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.299 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:35.300 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.300 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:35.301 23:43:23 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.301 23:43:23 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:35.301 23:43:23 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.301 23:43:23 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:35.301 23:43:23 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.301 
23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:35.301 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:35.302 
23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.302 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.303 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:35.303 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.304 23:43:23 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
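What the trace around this point is doing: nvme_get runs /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1, then walks the output with a `while IFS=: read -r reg val` loop and evals each "field : value" pair into a global associative array (ng1n1[nsze], ng1n1[ncap], and so on). Below is a minimal, self-contained sketch of that field-splitting idea; parse_id_output, the nameref, and the canned here-doc input are illustrative stand-ins only, not the real functions.sh helper, which declares a -gA array by name and evals each assignment instead.

    # Sketch only (assumptions noted above): split "field : value" lines, as
    # printed by `nvme id-ctrl` / `nvme id-ns`, into a named associative array.
    parse_id_output() {
        local -n _map=$1              # nameref to the caller's associative array
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}  # strip the padding around the field name
            val=${val# }              # drop the single leading space after ':'
            [[ -n $reg && -n $val ]] && _map[$reg]=$val
        done
    }

    declare -A ns=()
    parse_id_output ns <<'EOF'
    nsze   : 0x17a17a
    ncap   : 0x17a17a
    nsfeat : 0x14
    nlbaf  : 7
    EOF

    echo "nsze=${ns[nsze]} nlbaf=${ns[nlbaf]}"

Fed the sample input above, the sketch prints "nsze=0x17a17a nlbaf=7", which is the same register-to-value mapping the trace records one eval at a time for ng1n1.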
00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.304 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:35.305 23:43:23 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.305 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 
23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:35.306 
23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.306 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.306 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:35.307 23:43:23 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:35.307 23:43:23 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.307 23:43:23 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:35.307 23:43:23 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.307 23:43:23 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.307 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:35.308 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.308 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:35.309 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.309 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:35.310 
23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.310 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.579 
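The trace above is nvme/functions.sh filling a global bash associative array (nvme2) with every "field : value" pair that nvme-cli prints for id-ctrl on /dev/nvme2; the passes that follow repeat the same loop for the ng2n1/ng2n2/ng2n3 namespace character devices and register each one in the per-controller nvme2_ns map. A minimal sketch of that pattern is below; it assumes only that an nvme binary is on PATH (the job itself invokes /usr/local/src/nvme-cli/nvme), simplifies the whitespace handling, and is an illustration rather than the actual SPDK helper:

    #!/usr/bin/env bash
    # Sketch only: store each "field : value" line from `nvme id-ctrl` / `nvme id-ns`
    # in a global associative array named after the device node (nvme2, ng2n1, ...),
    # mirroring the nvme2[mdts]=7 / ng2n1[nsze]=0x100000 assignments seen in the trace.
    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -gA "$ref=()"                       # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # "mdts " -> "mdts"
            val=${val#"${val%%[![:space:]]*}"}    # drop leading spaces from the value
            [[ -n $reg && -n $val ]] || continue  # skip headers and blank lines
            eval "${ref}[\$reg]=\$val"            # nvme2[mdts]=7, ng2n1[nsze]=0x100000, ...
        done < <(nvme "$subcmd" "$dev")
    }

    # Usage, mirroring the controller and namespace passes in the log above:
    #   nvme_get_sketch nvme2 id-ctrl /dev/nvme2
    #   nvme_get_sketch ng2n1 id-ns   /dev/ng2n1
    #   echo "${nvme2[mdts]} ${ng2n1[nsze]}"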
23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:35.579 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.580 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:35.581 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:35.581 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 
23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.582 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.583 23:43:23 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.583 23:43:23 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.583 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.584 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.585 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:35.585 23:43:23 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:35.585 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.586 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:35.587 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.587 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:35.588 
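The trace repeats one capture pattern for every namespace node it visits: nvme_get invokes nvme-cli's id-ns against the device node, then folds each "register : value" line of the output into a global Bash associative array named after the device, which is what produces the long run of test/eval/assignment triples above. A minimal sketch of that pattern, paraphrased from the functions.sh@16-23 entries in this trace rather than quoted from the script itself:

    nvme_get() {
            local ref=$1 reg val
            shift
            local -gA "$ref=()"                      # global associative array named after the device
            while IFS=: read -r reg val; do
                    reg=${reg//[[:space:]]/}         # "lbaf  4" -> "lbaf4"
                    val=${val# }                     # drop the space that follows the colon
                    [[ -n $reg && -n $val ]] || continue
                    eval "${ref}[$reg]=\"$val\""     # e.g. nvme2n3[flbas]="0x4"
            done < <("$@")                           # e.g. nvme id-ns /dev/nvme2n3
    }

    # usage, matching the invocation seen in the trace:
    #   nvme_get nvme2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
    #   echo "${nvme2n3[nsze]}"   # -> 0x100000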
23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:35.588 23:43:23 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.588 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.589 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:35.589 23:43:23 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:35.589 23:43:23 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.589 23:43:23 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:35.589 23:43:23 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.589 23:43:23 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.589 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:35.590 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:35.590 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 
23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:35.590 23:43:23 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:35.590 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 
23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:35.591 
23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.591 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.592 23:43:23 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:35.592 23:43:23 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:35.592 23:43:23 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
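With all four controllers catalogued, get_ctrl_with_feature scc narrows them down by calling ctrl_has_scc on each one: it reads the parsed ONCS value (0x15d for every QEMU controller in this run) and tests bit 8, the Optional NVM Command Support flag for the Simple Copy Command. Since 0x15d = 0x100 | 0x5d, bit 8 is set on all of them and nvme1 is echoed as the first SCC-capable controller. A minimal sketch of that capability test, with the ONCS value hard-coded to the one seen in this log:

  #!/usr/bin/env bash
  # Sketch: ONCS bit 8 indicates Simple Copy Command support (cf. ctrl_has_scc).
  # 0x15d is the value the QEMU controllers report in this run.
  oncs=0x15d

  if (( oncs & (1 << 8) )); then        # 0x15d & 0x100 == 0x100, so the bit is set
      echo "controller supports SCC"
  else
      echo "controller does not support SCC"
  fi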
00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:35.592 23:43:23 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:35.592 23:43:23 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:35.592 23:43:23 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:35.592 23:43:23 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:36.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.736 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.736 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.736 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.736 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.998 23:43:24 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:36.998 23:43:24 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:36.998 23:43:24 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.998 23:43:24 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:36.998 ************************************ 00:09:36.998 START TEST nvme_simple_copy 00:09:36.998 ************************************ 00:09:36.998 23:43:24 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:37.259 Initializing NVMe Controllers 00:09:37.259 Attaching to 0000:00:10.0 00:09:37.259 Controller supports SCC. Attached to 0000:00:10.0 00:09:37.259 Namespace ID: 1 size: 6GB 00:09:37.259 Initialization complete. 
00:09:37.259 00:09:37.259 Controller QEMU NVMe Ctrl (12340 ) 00:09:37.259 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:37.259 Namespace Block Size:4096 00:09:37.259 Writing LBAs 0 to 63 with Random Data 00:09:37.259 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:37.259 LBAs matching Written Data: 64 00:09:37.259 00:09:37.259 real 0m0.264s 00:09:37.259 user 0m0.105s 00:09:37.259 sys 0m0.057s 00:09:37.259 ************************************ 00:09:37.259 END TEST nvme_simple_copy 00:09:37.259 ************************************ 00:09:37.259 23:43:25 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.259 23:43:25 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:37.259 ************************************ 00:09:37.259 END TEST nvme_scc 00:09:37.259 ************************************ 00:09:37.259 00:09:37.259 real 0m8.113s 00:09:37.259 user 0m1.209s 00:09:37.259 sys 0m1.598s 00:09:37.259 23:43:25 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.259 23:43:25 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:37.259 23:43:25 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:37.259 23:43:25 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:37.259 23:43:25 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:37.259 23:43:25 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:37.259 23:43:25 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:37.259 23:43:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.260 23:43:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.260 23:43:25 -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 ************************************ 00:09:37.260 START TEST nvme_fdp 00:09:37.260 ************************************ 00:09:37.260 23:43:25 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:37.260 * Looking for test storage... 00:09:37.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:37.260 23:43:25 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.260 23:43:25 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.260 23:43:25 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.522 23:43:25 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:37.522 23:43:25 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.522 23:43:25 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.522 --rc genhtml_branch_coverage=1 00:09:37.522 --rc genhtml_function_coverage=1 00:09:37.522 --rc genhtml_legend=1 00:09:37.522 --rc geninfo_all_blocks=1 00:09:37.522 --rc geninfo_unexecuted_blocks=1 00:09:37.522 00:09:37.522 ' 00:09:37.522 23:43:25 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.522 --rc genhtml_branch_coverage=1 00:09:37.522 --rc genhtml_function_coverage=1 00:09:37.522 --rc genhtml_legend=1 00:09:37.522 --rc geninfo_all_blocks=1 00:09:37.522 --rc geninfo_unexecuted_blocks=1 00:09:37.522 00:09:37.522 ' 00:09:37.522 23:43:25 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.522 --rc genhtml_branch_coverage=1 00:09:37.522 --rc genhtml_function_coverage=1 00:09:37.522 --rc genhtml_legend=1 00:09:37.522 --rc geninfo_all_blocks=1 00:09:37.522 --rc geninfo_unexecuted_blocks=1 00:09:37.522 00:09:37.522 ' 00:09:37.522 23:43:25 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.522 --rc genhtml_branch_coverage=1 00:09:37.522 --rc genhtml_function_coverage=1 00:09:37.522 --rc genhtml_legend=1 00:09:37.522 --rc geninfo_all_blocks=1 00:09:37.522 --rc geninfo_unexecuted_blocks=1 00:09:37.522 00:09:37.522 ' 00:09:37.522 23:43:25 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.522 23:43:25 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.522 23:43:25 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.522 23:43:25 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.522 23:43:25 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.522 23:43:25 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:37.522 23:43:25 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:37.522 23:43:25 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:37.522 23:43:25 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.522 23:43:25 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:37.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.046 Waiting for block devices as requested 00:09:38.046 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.047 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.047 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.307 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.607 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:43.607 23:43:31 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:43.607 23:43:31 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:43.607 23:43:31 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:43.607 23:43:31 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:43.607 23:43:31 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.607 23:43:31 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:43.607 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:43.608 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.608 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.608 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:43.609 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 
23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:43.609 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:43.610 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:43.610 23:43:31 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:43.610 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.610 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:43.611 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
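The long id-ctrl/id-ns dump above is scan_nvme_ctrls caching every register into bash associative arrays (nvme0[oncs]=0x15d, ng0n1[nsze]=0x140000, and so on) so later capability checks never have to shell out to nvme again. A minimal sketch of that parsing pattern, assuming nvme-cli's "name : value" text output; it is a simplified stand-in, not the repo's nvme_get implementation:

#!/usr/bin/env bash
# Hedged sketch of the nvme_get pattern seen in the trace above (simplified;
# the real functions.sh also walks namespaces, power states, and sysfs).
set -euo pipefail

declare -A ctrl_regs

# `nvme id-ctrl` emits "name : value" pairs; split on the first colon and
# store each register in an associative array keyed by its field name.
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}             # drop padding around the field name
  val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace from the value
  [[ -n $reg && -n $val ]] || continue # skip blank and continuation lines
  ctrl_regs[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)

echo "oncs=${ctrl_regs[oncs]:-unset} mdts=${ctrl_regs[mdts]:-unset}"

Keeping the values in shell arrays is what lets later steps such as ctrl_has_scc and the FDP checks test fields like oncs or ctratt with plain arithmetic instead of re-running nvme id-ctrl per check.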
00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:43.611 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:43.612 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.612 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.613 23:43:31 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:43.613 23:43:31 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:43.613 23:43:31 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:43.613 23:43:31 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.613 23:43:31 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:43.613 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:43.614 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.614 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.615 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:43.616 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:43.617 23:43:31 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:43.617 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.617 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.618 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.618 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:43.618 23:43:31 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.618 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:43.619 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:43.619 23:43:31 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:43.619 23:43:31 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:43.619 23:43:31 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.619 23:43:31 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:43.619 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:43.620 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.620 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:43.621 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.621 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
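(Editor's note, not part of the original log output.) The trace up to this point records the nvme2 controller identify fields (sqes, cqes, nn, oncs, vwc, sgls, subnqn, ...) being captured one "reg: val" pair at a time into a bash associative array via eval. A minimal sketch of that parsing pattern, assuming nvme-cli's "field : value" id-ctrl output; the array and variable names below are illustrative, not the verbatim nvme/functions.sh source:

  #!/usr/bin/env bash
  # Sketch (assumed names): capture "field : value" lines from nvme-cli
  # into an associative array, mirroring the nvme_get trace above.
  declare -A ctrl=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}               # field name, whitespace stripped
      [[ -n $reg && -n $val ]] || continue
      val=${val#"${val%%[![:space:]]*}"}     # left-trim the value
      ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme2)
  echo "oncs=${ctrl[oncs]} sqes=${ctrl[sqes]} subnqn=${ctrl[subnqn]}"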
00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.622 23:43:31 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 
23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:43.623 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.624 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:43.625 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 
23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.625 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:43.626 
23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
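(Editor's note, not part of the original log output.) Every namespace identified in this trace reports the same geometry: flbas 0x4 with lbaf4 "ms:0 lbads:12 (in use)" and nsze/ncap/nuse of 0x100000 blocks. A quick arithmetic check of what that amounts to; the values come from the log, the snippet itself is only illustrative:

  # Illustrative only: decode the in-use LBA format reported above.
  nsze=0x100000   # namespace size in logical blocks (from the log)
  lbads=12        # lbads of the in-use format lbaf4 (ms:0 lbads:12)
  block=$((1 << lbads))        # 2^12 = 4096-byte logical blocks
  bytes=$((nsze * block))
  echo "block=${block}B total=${bytes} bytes ($((bytes >> 20)) MiB)"
  # -> block=4096B total=4294967296 bytes (4096 MiB)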
00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:43.626 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:43.627 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.627 23:43:31 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:43.627 23:43:31 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:43.627 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.628 
23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:43.628 23:43:31 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.628 
23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.628 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
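[editor's note] The trace above repeatedly shows the same parsing pattern: nvme_get runs `nvme id-ns` for a namespace, then an `IFS=: read -r reg val` loop stores each non-empty field into a per-device associative array via eval (e.g. nvme2n1[nsze]=0x100000). The following is a minimal, simplified sketch of that pattern, not the real nvme/functions.sh: the function name parse_id_ns, the whitespace trimming, and the assumption that an `nvme` binary from nvme-cli is on PATH are all illustrative choices.

    #!/usr/bin/env bash
    # Sketch only: parse "field : value" lines from `nvme id-ns` into a
    # global associative array, mirroring the IFS=: / read -r reg val /
    # eval pattern visible in the trace above.
    parse_id_ns() {
        local dev=$1 ref=$2 reg val
        # Same declaration shape as the trace: local -gA 'nvme2n1=()'
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # strip padding around the field name
            val=${val# }               # drop the space after ':'
            [[ -n $reg && -n $val ]] || continue
            # Same assignment shape as the trace: eval 'nvme2n1[nsze]="0x100000"'
            eval "${ref}[${reg}]=\"\$val\""
        done < <(nvme id-ns "$dev")    # assumes nvme-cli is installed
    }
    # Hypothetical usage: parse_id_ns /dev/nvme2n1 nvme2n1; echo "${nvme2n1[nsze]}"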
00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:43.629 23:43:31 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.629 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:43.630 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.630 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:43.631 23:43:31 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:43.631 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:43.631 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.632 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.632 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.893 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.894 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:43.894 23:43:31 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:43.894 23:43:31 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:43.894 23:43:31 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.894 23:43:31 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
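[editor's note] At this point the trace has moved on from per-namespace parsing to controller enumeration: it walks /sys/class/nvme/nvme*, resolves each controller's PCI address (bdfs["nvme2"]=0000:00:12.0, then pci=0000:00:13.0 for nvme3), and records the controller-to-bdf mapping before running `nvme id-ctrl`. Below is a simplified, hypothetical illustration of that enumeration step, not the real functions.sh; it assumes the standard sysfs layout where /sys/class/nvme/<ctrl>/device symlinks to the PCI device directory whose basename is the bdf, and it omits the pci_can_use allow-list check seen in the trace.

    #!/usr/bin/env bash
    # Sketch only: enumerate NVMe controllers and map each to its PCI bdf,
    # the way the trace fills ctrls[] and bdfs[].
    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme3
        # Resolve the PCI address from the device symlink, e.g. 0000:00:13.0
        pci=$(basename "$(readlink -f "$ctrl/device")") || continue
        ctrls["$ctrl_dev"]=$ctrl_dev
        bdfs["$ctrl_dev"]=$pci
    done
    # Hypothetical check of the resulting mapping:
    for c in "${!bdfs[@]}"; do printf '%s -> %s\n' "$c" "${bdfs[$c]}"; done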
00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.894 23:43:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.894 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 
23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:43.895 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:43.896 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:43.897 23:43:31 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:43.897 23:43:31 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:43.897 23:43:31 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:43.897 23:43:31 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:44.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.739 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.739 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.000 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.000 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.000 23:43:32 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:45.000 23:43:32 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:45.000 23:43:32 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.000 23:43:32 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:45.000 ************************************ 00:09:45.000 START TEST nvme_flexible_data_placement 00:09:45.000 ************************************ 00:09:45.000 23:43:32 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:45.261 Initializing NVMe Controllers 00:09:45.261 Attaching to 0000:00:13.0 00:09:45.261 Controller supports FDP Attached to 0000:00:13.0 00:09:45.261 Namespace ID: 1 Endurance Group ID: 1 00:09:45.261 Initialization complete. 
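The controller selection traced above (get_ctrl_with_feature fdp) reduces to a single capability test: a controller is FDP-capable when bit 19 of its CTRATT value is set, which is why nvme3 (ctratt=0x88010) is chosen while nvme0, nvme1 and nvme2 (ctratt=0x8000) are skipped. A minimal standalone sketch of that check; the helper name has_fdp is chosen here for illustration, the script's own helper is ctrl_has_fdp:

    has_fdp() {
        local ctratt=$1
        # (( ... )) succeeds (exit status 0) only when the FDP bit, 1 << 19 = 0x80000, is set
        (( ctratt & 1 << 19 ))
    }
    has_fdp 0x88010 && echo "FDP supported"        # nvme3 in this run
    has_fdp 0x8000  || echo "FDP not supported"    # nvme0, nvme1, nvme2 in this run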
00:09:45.261 00:09:45.261 ================================== 00:09:45.261 == FDP tests for Namespace: #01 == 00:09:45.261 ================================== 00:09:45.261 00:09:45.261 Get Feature: FDP: 00:09:45.261 ================= 00:09:45.261 Enabled: Yes 00:09:45.261 FDP configuration Index: 0 00:09:45.261 00:09:45.261 FDP configurations log page 00:09:45.261 =========================== 00:09:45.261 Number of FDP configurations: 1 00:09:45.261 Version: 0 00:09:45.261 Size: 112 00:09:45.261 FDP Configuration Descriptor: 0 00:09:45.261 Descriptor Size: 96 00:09:45.261 Reclaim Group Identifier format: 2 00:09:45.261 FDP Volatile Write Cache: Not Present 00:09:45.261 FDP Configuration: Valid 00:09:45.261 Vendor Specific Size: 0 00:09:45.261 Number of Reclaim Groups: 2 00:09:45.261 Number of Reclaim Unit Handles: 8 00:09:45.261 Max Placement Identifiers: 128 00:09:45.261 Number of Namespaces Supported: 256 00:09:45.261 Reclaim unit Nominal Size: 6000000 bytes 00:09:45.261 Estimated Reclaim Unit Time Limit: Not Reported 00:09:45.261 RUH Desc #000: RUH Type: Initially Isolated 00:09:45.261 RUH Desc #001: RUH Type: Initially Isolated 00:09:45.261 RUH Desc #002: RUH Type: Initially Isolated 00:09:45.261 RUH Desc #003: RUH Type: Initially Isolated 00:09:45.261 RUH Desc #004: RUH Type: Initially Isolated 00:09:45.261 RUH Desc #005: RUH Type: Initially Isolated 00:09:45.261 RUH Desc #006: RUH Type: Initially Isolated 00:09:45.261 RUH Desc #007: RUH Type: Initially Isolated 00:09:45.261 00:09:45.261 FDP reclaim unit handle usage log page 00:09:45.261 ====================================== 00:09:45.261 Number of Reclaim Unit Handles: 8 00:09:45.261 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:45.261 RUH Usage Desc #001: RUH Attributes: Unused 00:09:45.261 RUH Usage Desc #002: RUH Attributes: Unused 00:09:45.261 RUH Usage Desc #003: RUH Attributes: Unused 00:09:45.261 RUH Usage Desc #004: RUH Attributes: Unused 00:09:45.261 RUH Usage Desc #005: RUH Attributes: Unused 00:09:45.261 RUH Usage Desc #006: RUH Attributes: Unused 00:09:45.261 RUH Usage Desc #007: RUH Attributes: Unused 00:09:45.261 00:09:45.261 FDP statistics log page 00:09:45.261 ======================= 00:09:45.261 Host bytes with metadata written: 2090598400 00:09:45.261 Media bytes with metadata written: 2091737088 00:09:45.261 Media bytes erased: 0 00:09:45.261 00:09:45.261 FDP Reclaim unit handle status 00:09:45.261 ============================== 00:09:45.261 Number of RUHS descriptors: 2 00:09:45.261 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001640 00:09:45.261 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:45.261 00:09:45.261 FDP write on placement id: 0 success 00:09:45.261 00:09:45.261 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:45.261 00:09:45.261 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:45.261 00:09:45.261 Get Feature: FDP Events for Placement handle: #0 00:09:45.261 ======================== 00:09:45.261 Number of FDP Events: 6 00:09:45.261 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:45.261 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:45.261 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:45.261 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:45.261 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:45.261 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:45.261 00:09:45.261 FDP events log
page 00:09:45.261 =================== 00:09:45.261 Number of FDP events: 1 00:09:45.261 FDP Event #0: 00:09:45.261 Event Type: RU Not Written to Capacity 00:09:45.261 Placement Identifier: Valid 00:09:45.261 NSID: Valid 00:09:45.261 Location: Valid 00:09:45.261 Placement Identifier: 0 00:09:45.261 Event Timestamp: 5 00:09:45.261 Namespace Identifier: 1 00:09:45.261 Reclaim Group Identifier: 0 00:09:45.261 Reclaim Unit Handle Identifier: 0 00:09:45.261 00:09:45.261 FDP test passed 00:09:45.261 ************************************ 00:09:45.261 END TEST nvme_flexible_data_placement 00:09:45.261 ************************************ 00:09:45.261 00:09:45.261 real 0m0.232s 00:09:45.261 user 0m0.062s 00:09:45.261 sys 0m0.068s 00:09:45.261 23:43:33 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.261 23:43:33 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:45.261 00:09:45.261 real 0m7.989s 00:09:45.261 user 0m1.129s 00:09:45.261 sys 0m1.555s 00:09:45.261 23:43:33 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.261 23:43:33 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:45.261 ************************************ 00:09:45.261 END TEST nvme_fdp 00:09:45.261 ************************************ 00:09:45.261 23:43:33 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:45.261 23:43:33 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:45.261 23:43:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.261 23:43:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.261 23:43:33 -- common/autotest_common.sh@10 -- # set +x 00:09:45.261 ************************************ 00:09:45.261 START TEST nvme_rpc 00:09:45.261 ************************************ 00:09:45.261 23:43:33 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:45.523 * Looking for test storage... 
00:09:45.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.523 23:43:33 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.523 --rc genhtml_branch_coverage=1 00:09:45.523 --rc genhtml_function_coverage=1 00:09:45.523 --rc genhtml_legend=1 00:09:45.523 --rc geninfo_all_blocks=1 00:09:45.523 --rc geninfo_unexecuted_blocks=1 00:09:45.523 00:09:45.523 ' 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.523 --rc genhtml_branch_coverage=1 00:09:45.523 --rc genhtml_function_coverage=1 00:09:45.523 --rc genhtml_legend=1 00:09:45.523 --rc geninfo_all_blocks=1 00:09:45.523 --rc geninfo_unexecuted_blocks=1 00:09:45.523 00:09:45.523 ' 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.523 --rc genhtml_branch_coverage=1 00:09:45.523 --rc genhtml_function_coverage=1 00:09:45.523 --rc genhtml_legend=1 00:09:45.523 --rc geninfo_all_blocks=1 00:09:45.523 --rc geninfo_unexecuted_blocks=1 00:09:45.523 00:09:45.523 ' 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.523 --rc genhtml_branch_coverage=1 00:09:45.523 --rc genhtml_function_coverage=1 00:09:45.523 --rc genhtml_legend=1 00:09:45.523 --rc geninfo_all_blocks=1 00:09:45.523 --rc geninfo_unexecuted_blocks=1 00:09:45.523 00:09:45.523 ' 00:09:45.523 23:43:33 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.523 23:43:33 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:45.523 23:43:33 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:45.523 23:43:33 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77302 00:09:45.523 23:43:33 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:45.523 23:43:33 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77302 00:09:45.523 23:43:33 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77302 ']' 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.523 23:43:33 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.784 [2024-11-26 23:43:33.659082] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
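The bdf=0000:00:10.0 picked above comes from enumerating every NVMe transport address reported by gen_nvme.sh and taking the first one. A minimal sketch of that lookup, assuming jq is available and using the repository paths from this run (the function name first_nvme_bdf is illustrative; the traced helper is get_first_nvme_bdf):

    first_nvme_bdf() {
        local bdfs
        bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1
        echo "${bdfs[0]}"    # 0000:00:10.0 here; 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0 follow
    }
    bdf=$(first_nvme_bdf)    # the nvme_rpc test then attaches Nvme0 to this address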
00:09:45.784 [2024-11-26 23:43:33.659497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77302 ] 00:09:45.784 [2024-11-26 23:43:33.804424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.784 [2024-11-26 23:43:33.847456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.784 [2024-11-26 23:43:33.847516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.726 23:43:34 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.726 23:43:34 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:46.726 23:43:34 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:46.726 Nvme0n1 00:09:46.726 23:43:34 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:46.726 23:43:34 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:46.987 request: 00:09:46.987 { 00:09:46.987 "bdev_name": "Nvme0n1", 00:09:46.987 "filename": "non_existing_file", 00:09:46.987 "method": "bdev_nvme_apply_firmware", 00:09:46.987 "req_id": 1 00:09:46.987 } 00:09:46.987 Got JSON-RPC error response 00:09:46.987 response: 00:09:46.987 { 00:09:46.987 "code": -32603, 00:09:46.987 "message": "open file failed." 00:09:46.987 } 00:09:46.987 23:43:34 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:46.987 23:43:34 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:46.987 23:43:34 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:47.247 23:43:35 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:47.247 23:43:35 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77302 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77302 ']' 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77302 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77302 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.247 killing process with pid 77302 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77302' 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77302 00:09:47.247 23:43:35 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77302 00:09:47.819 ************************************ 00:09:47.819 END TEST nvme_rpc 00:09:47.819 ************************************ 00:09:47.819 00:09:47.819 real 0m2.360s 00:09:47.819 user 0m4.354s 00:09:47.820 sys 0m0.714s 00:09:47.820 23:43:35 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.820 23:43:35 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.820 23:43:35 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:47.820 23:43:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:47.820 23:43:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.820 23:43:35 -- common/autotest_common.sh@10 -- # set +x 00:09:47.820 ************************************ 00:09:47.820 START TEST nvme_rpc_timeouts 00:09:47.820 ************************************ 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:47.820 * Looking for test storage... 00:09:47.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.820 23:43:35 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.820 --rc genhtml_branch_coverage=1 00:09:47.820 --rc genhtml_function_coverage=1 00:09:47.820 --rc genhtml_legend=1 00:09:47.820 --rc geninfo_all_blocks=1 00:09:47.820 --rc geninfo_unexecuted_blocks=1 00:09:47.820 00:09:47.820 ' 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.820 --rc genhtml_branch_coverage=1 00:09:47.820 --rc genhtml_function_coverage=1 00:09:47.820 --rc genhtml_legend=1 00:09:47.820 --rc geninfo_all_blocks=1 00:09:47.820 --rc geninfo_unexecuted_blocks=1 00:09:47.820 00:09:47.820 ' 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.820 --rc genhtml_branch_coverage=1 00:09:47.820 --rc genhtml_function_coverage=1 00:09:47.820 --rc genhtml_legend=1 00:09:47.820 --rc geninfo_all_blocks=1 00:09:47.820 --rc geninfo_unexecuted_blocks=1 00:09:47.820 00:09:47.820 ' 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.820 --rc genhtml_branch_coverage=1 00:09:47.820 --rc genhtml_function_coverage=1 00:09:47.820 --rc genhtml_legend=1 00:09:47.820 --rc geninfo_all_blocks=1 00:09:47.820 --rc geninfo_unexecuted_blocks=1 00:09:47.820 00:09:47.820 ' 00:09:47.820 23:43:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.820 23:43:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77356 00:09:47.820 23:43:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77356 00:09:47.820 23:43:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77393 00:09:47.820 23:43:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:47.820 23:43:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:47.820 23:43:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77393 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77393 ']' 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.820 23:43:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:48.082 [2024-11-26 23:43:36.005846] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
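Before any timeout checks run, the test launches spdk_tgt in the background, registers cleanup, and blocks until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the PID is captured with $! and substituting a simple rpc_get_methods poll for autotest_common.sh's waitforlisten (the real helper's retry interval may differ; max_retries=100 is from the trace):

    tmpfile_default_settings=/tmp/settings_default_77356
    tmpfile_modified_settings=/tmp/settings_modified_77356
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!    # 77393 in this run
    trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings}; exit 1' SIGINT SIGTERM EXIT
    # poll /var/tmp/spdk.sock until the target responds; interval chosen arbitrarily for this sketch
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done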
00:09:48.082 [2024-11-26 23:43:36.006254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77393 ] 00:09:48.082 [2024-11-26 23:43:36.163679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.082 [2024-11-26 23:43:36.206338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.082 [2024-11-26 23:43:36.206427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.026 Checking default timeout settings: 00:09:49.026 23:43:36 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.026 23:43:36 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:49.026 23:43:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:49.026 23:43:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:49.287 Making settings changes with rpc: 00:09:49.287 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:49.287 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:49.548 Check default vs. modified settings: 00:09:49.548 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:49.548 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77356 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77356 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.808 Setting action_on_timeout is changed as expected. 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77356 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77356 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.808 Setting timeout_us is changed as expected. 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77356 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77356 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.808 Setting timeout_admin_us is changed as expected. 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
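The comparison loop traced above boils down to: save the default config, change the NVMe timeouts over RPC, save the config again, and diff the three fields. A minimal sketch of that flow, using the exact bdev_nvme_set_options call from the log (the temp-file names here are illustrative rather than the PID-derived ones the test uses):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    default=/tmp/settings_default.$$      # illustrative names; the test derives them from its PID
    modified=/tmp/settings_modified.$$

    $rpc save_config > "$default"
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > "$modified"

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
    done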
00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77356 /tmp/settings_modified_77356 00:09:49.808 23:43:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77393 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77393 ']' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77393 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77393 00:09:49.808 killing process with pid 77393 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77393' 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77393 00:09:49.808 23:43:37 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77393 00:09:50.380 RPC TIMEOUT SETTING TEST PASSED. 00:09:50.380 23:43:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:50.380 00:09:50.380 real 0m2.574s 00:09:50.380 user 0m4.934s 00:09:50.380 sys 0m0.703s 00:09:50.380 ************************************ 00:09:50.380 END TEST nvme_rpc_timeouts 00:09:50.380 ************************************ 00:09:50.380 23:43:38 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.380 23:43:38 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:50.380 23:43:38 -- spdk/autotest.sh@239 -- # uname -s 00:09:50.380 23:43:38 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:50.380 23:43:38 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:50.380 23:43:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.380 23:43:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.380 23:43:38 -- common/autotest_common.sh@10 -- # set +x 00:09:50.380 ************************************ 00:09:50.380 START TEST sw_hotplug 00:09:50.380 ************************************ 00:09:50.380 23:43:38 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:50.380 * Looking for test storage... 
00:09:50.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:50.380 23:43:38 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.380 23:43:38 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.380 23:43:38 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.641 23:43:38 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.641 23:43:38 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:50.641 23:43:38 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.641 23:43:38 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.641 --rc genhtml_branch_coverage=1 00:09:50.641 --rc genhtml_function_coverage=1 00:09:50.641 --rc genhtml_legend=1 00:09:50.641 --rc geninfo_all_blocks=1 00:09:50.641 --rc geninfo_unexecuted_blocks=1 00:09:50.641 00:09:50.641 ' 00:09:50.641 23:43:38 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.641 --rc genhtml_branch_coverage=1 00:09:50.641 --rc genhtml_function_coverage=1 00:09:50.641 --rc genhtml_legend=1 00:09:50.641 --rc geninfo_all_blocks=1 00:09:50.641 --rc geninfo_unexecuted_blocks=1 00:09:50.641 00:09:50.641 ' 00:09:50.641 23:43:38 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.641 --rc genhtml_branch_coverage=1 00:09:50.641 --rc genhtml_function_coverage=1 00:09:50.641 --rc genhtml_legend=1 00:09:50.641 --rc geninfo_all_blocks=1 00:09:50.641 --rc geninfo_unexecuted_blocks=1 00:09:50.641 00:09:50.641 ' 00:09:50.641 23:43:38 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.641 --rc genhtml_branch_coverage=1 00:09:50.641 --rc genhtml_function_coverage=1 00:09:50.641 --rc genhtml_legend=1 00:09:50.641 --rc geninfo_all_blocks=1 00:09:50.641 --rc geninfo_unexecuted_blocks=1 00:09:50.641 00:09:50.641 ' 00:09:50.641 23:43:38 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.903 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:50.903 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:50.903 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:50.903 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.177 23:43:39 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:51.177 23:43:39 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:51.177 23:43:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:51.177 23:43:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:51.177 23:43:39 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:51.177 23:43:39 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:51.177 23:43:39 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:51.177 23:43:39 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:51.177 23:43:39 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.178 
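The nvme_in_userspace discovery traced above filters lspci output by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory controller), programming interface 02 (NVMe). Reassembled as a stand-alone pipeline, taken directly from the commands in the trace:

    # Print the BDF of every NVMe controller (class code 0108, prog-if -p02).
    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
    # On this VM it yields: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0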
23:43:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:51.178 23:43:39 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:51.178 23:43:39 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:51.178 23:43:39 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:51.178 23:43:39 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:51.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.706 Waiting for block devices as requested 00:09:51.706 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.706 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.706 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.976 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.265 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:57.265 23:43:44 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:57.265 23:43:44 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.535 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:57.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.535 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:57.806 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:58.067 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.067 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:58.067 23:43:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78242 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:58.067 23:43:46 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:58.067 23:43:46 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:58.067 23:43:46 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:58.067 23:43:46 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:58.067 23:43:46 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:58.067 23:43:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:58.328 Initializing NVMe Controllers 00:09:58.328 Attaching to 0000:00:10.0 00:09:58.328 Attaching to 0000:00:11.0 00:09:58.328 Attached to 0000:00:10.0 00:09:58.328 Attached to 0000:00:11.0 00:09:58.328 Initialization complete. Starting I/O... 00:09:58.328 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:58.328 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:58.328 00:09:59.263 QEMU NVMe Ctrl (12340 ): 3107 I/Os completed (+3107) 00:09:59.263 QEMU NVMe Ctrl (12341 ): 3035 I/Os completed (+3035) 00:09:59.263 00:10:00.635 QEMU NVMe Ctrl (12340 ): 7349 I/Os completed (+4242) 00:10:00.635 QEMU NVMe Ctrl (12341 ): 7197 I/Os completed (+4162) 00:10:00.635 00:10:01.577 QEMU NVMe Ctrl (12340 ): 10882 I/Os completed (+3533) 00:10:01.577 QEMU NVMe Ctrl (12341 ): 10918 I/Os completed (+3721) 00:10:01.577 00:10:02.520 QEMU NVMe Ctrl (12340 ): 13990 I/Os completed (+3108) 00:10:02.520 QEMU NVMe Ctrl (12341 ): 14108 I/Os completed (+3190) 00:10:02.520 00:10:03.462 QEMU NVMe Ctrl (12340 ): 17149 I/Os completed (+3159) 00:10:03.462 QEMU NVMe Ctrl (12341 ): 17280 I/Os completed (+3172) 00:10:03.462 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.406 [2024-11-26 23:43:52.190516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:04.406 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:04.406 [2024-11-26 23:43:52.191914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.192344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.192401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.192432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:04.406 [2024-11-26 23:43:52.194136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.194199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.194215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.194233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.406 [2024-11-26 23:43:52.213600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:04.406 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:04.406 [2024-11-26 23:43:52.214745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.214851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.214874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.214895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:04.406 [2024-11-26 23:43:52.216265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.216312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.216333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 [2024-11-26 23:43:52.216353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:04.406 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:04.406 EAL: Scan for (pci) bus failed. 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:04.406 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:04.406 Attaching to 0000:00:10.0 00:10:04.406 Attached to 0000:00:10.0 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.406 23:43:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:04.406 Attaching to 0000:00:11.0 00:10:04.406 Attached to 0000:00:11.0 00:10:05.342 QEMU NVMe Ctrl (12340 ): 3576 I/Os completed (+3576) 00:10:05.342 QEMU NVMe Ctrl (12341 ): 3409 I/Os completed (+3409) 00:10:05.342 00:10:06.288 QEMU NVMe Ctrl (12340 ): 6731 I/Os completed (+3155) 00:10:06.288 QEMU NVMe Ctrl (12341 ): 6560 I/Os completed (+3151) 00:10:06.288 00:10:07.680 QEMU NVMe Ctrl (12340 ): 9403 I/Os completed (+2672) 00:10:07.680 QEMU NVMe Ctrl (12341 ): 9236 I/Os completed (+2676) 00:10:07.680 00:10:08.253 QEMU NVMe Ctrl (12340 ): 11874 I/Os completed (+2471) 00:10:08.253 QEMU NVMe Ctrl (12341 ): 11704 I/Os completed (+2468) 00:10:08.253 00:10:09.637 QEMU NVMe Ctrl (12340 ): 14422 I/Os completed (+2548) 00:10:09.637 QEMU NVMe Ctrl (12341 ): 14276 I/Os completed (+2572) 00:10:09.637 00:10:10.579 QEMU NVMe Ctrl (12340 ): 17091 I/Os completed (+2669) 00:10:10.579 QEMU NVMe Ctrl (12341 ): 16945 I/Os completed (+2669) 00:10:10.579 00:10:11.526 QEMU NVMe Ctrl (12340 ): 19623 I/Os completed (+2532) 00:10:11.526 QEMU NVMe Ctrl (12341 ): 19480 I/Os completed (+2535) 
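Each hotplug event traced above removes the controllers, rescans the PCI bus, hands the devices back to uio_pci_generic, and then waits before the next round. The trace only shows the values being echoed, so the sysfs paths in this sketch are an assumption about where those writes land:

    nvmes=(0000:00:10.0 0000:00:11.0)
    for ((event = 0; event < 3; event++)); do
        for bdf in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$bdf/remove"        # detach: outstanding I/O is aborted
        done
        echo 1 > /sys/bus/pci/rescan                           # bring the controllers back
        for bdf in "${nvmes[@]}"; do
            echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # assumed path
            echo "$bdf" > /sys/bus/pci/drivers_probe                              # assumed path
            echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
        done
        sleep 12                                               # matches the sleep between events in the trace
    done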
00:10:11.526 00:10:12.470 QEMU NVMe Ctrl (12340 ): 22103 I/Os completed (+2480) 00:10:12.470 QEMU NVMe Ctrl (12341 ): 21974 I/Os completed (+2494) 00:10:12.470 00:10:13.414 QEMU NVMe Ctrl (12340 ): 24679 I/Os completed (+2576) 00:10:13.414 QEMU NVMe Ctrl (12341 ): 24550 I/Os completed (+2576) 00:10:13.414 00:10:14.349 QEMU NVMe Ctrl (12340 ): 28374 I/Os completed (+3695) 00:10:14.349 QEMU NVMe Ctrl (12341 ): 28262 I/Os completed (+3712) 00:10:14.349 00:10:15.289 QEMU NVMe Ctrl (12340 ): 31818 I/Os completed (+3444) 00:10:15.289 QEMU NVMe Ctrl (12341 ): 31690 I/Os completed (+3428) 00:10:15.289 00:10:16.676 QEMU NVMe Ctrl (12340 ): 34360 I/Os completed (+2542) 00:10:16.676 QEMU NVMe Ctrl (12341 ): 34231 I/Os completed (+2541) 00:10:16.676 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.676 [2024-11-26 23:44:04.509862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:16.676 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:16.676 [2024-11-26 23:44:04.511332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.511515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.511561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.511687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:16.676 [2024-11-26 23:44:04.513360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.513499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.513535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.513602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.676 [2024-11-26 23:44:04.536417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:16.676 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:16.676 [2024-11-26 23:44:04.537647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.537786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.537851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.537882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:16.676 [2024-11-26 23:44:04.539244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.539381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.539428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 [2024-11-26 23:44:04.539458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.676 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:16.676 EAL: Scan for (pci) bus failed. 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:16.676 Attaching to 0000:00:10.0 00:10:16.676 Attached to 0000:00:10.0 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:16.676 23:44:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:16.676 Attaching to 0000:00:11.0 00:10:16.676 Attached to 0000:00:11.0 00:10:17.249 QEMU NVMe Ctrl (12340 ): 1648 I/Os completed (+1648) 00:10:17.249 QEMU NVMe Ctrl (12341 ): 1452 I/Os completed (+1452) 00:10:17.249 00:10:18.636 QEMU NVMe Ctrl (12340 ): 4384 I/Os completed (+2736) 00:10:18.636 QEMU NVMe Ctrl (12341 ): 4195 I/Os completed (+2743) 00:10:18.636 00:10:19.579 QEMU NVMe Ctrl (12340 ): 6914 I/Os completed (+2530) 00:10:19.579 QEMU NVMe Ctrl (12341 ): 6731 I/Os completed (+2536) 00:10:19.579 00:10:20.519 QEMU NVMe Ctrl (12340 ): 9354 I/Os completed (+2440) 00:10:20.520 QEMU NVMe Ctrl (12341 ): 9189 I/Os completed (+2458) 00:10:20.520 00:10:21.463 QEMU NVMe Ctrl (12340 ): 11906 I/Os completed (+2552) 00:10:21.463 QEMU NVMe Ctrl (12341 ): 11768 I/Os completed (+2579) 00:10:21.463 00:10:22.405 QEMU NVMe Ctrl (12340 ): 14433 I/Os completed (+2527) 00:10:22.405 QEMU NVMe Ctrl (12341 ): 14328 I/Os completed (+2560) 00:10:22.405 00:10:23.363 QEMU NVMe Ctrl (12340 ): 16988 I/Os completed (+2555) 00:10:23.363 QEMU NVMe Ctrl (12341 ): 16870 I/Os completed (+2542) 00:10:23.363 
00:10:24.305 QEMU NVMe Ctrl (12340 ): 20881 I/Os completed (+3893) 00:10:24.305 QEMU NVMe Ctrl (12341 ): 20764 I/Os completed (+3894) 00:10:24.305 00:10:25.248 QEMU NVMe Ctrl (12340 ): 25709 I/Os completed (+4828) 00:10:25.248 QEMU NVMe Ctrl (12341 ): 25584 I/Os completed (+4820) 00:10:25.248 00:10:26.633 QEMU NVMe Ctrl (12340 ): 30552 I/Os completed (+4843) 00:10:26.633 QEMU NVMe Ctrl (12341 ): 30422 I/Os completed (+4838) 00:10:26.633 00:10:27.583 QEMU NVMe Ctrl (12340 ): 35408 I/Os completed (+4856) 00:10:27.583 QEMU NVMe Ctrl (12341 ): 35295 I/Os completed (+4873) 00:10:27.583 00:10:28.524 QEMU NVMe Ctrl (12340 ): 40288 I/Os completed (+4880) 00:10:28.525 QEMU NVMe Ctrl (12341 ): 40175 I/Os completed (+4880) 00:10:28.525 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:28.786 [2024-11-26 23:44:16.792000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:28.786 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:28.786 [2024-11-26 23:44:16.792834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.792863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.792877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.792892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:28.786 [2024-11-26 23:44:16.794215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.794252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.794267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.794278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:28.786 [2024-11-26 23:44:16.813091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:28.786 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:28.786 [2024-11-26 23:44:16.813787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.813833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.813848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.813859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:28.786 [2024-11-26 23:44:16.814643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.814664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.814675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 [2024-11-26 23:44:16.814684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:28.786 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:28.786 EAL: Scan for (pci) bus failed. 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.786 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:29.046 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:29.046 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:29.046 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:29.046 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:29.046 23:44:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:29.046 Attaching to 0000:00:10.0 00:10:29.046 Attached to 0000:00:10.0 00:10:29.046 23:44:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:29.046 23:44:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:29.046 23:44:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:29.046 Attaching to 0000:00:11.0 00:10:29.046 Attached to 0000:00:11.0 00:10:29.046 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:29.046 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:29.046 [2024-11-26 23:44:17.077806] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:41.287 23:44:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:41.287 23:44:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.287 23:44:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.89 00:10:41.287 23:44:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.89 00:10:41.287 23:44:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:41.287 23:44:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.89 00:10:41.287 23:44:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.89 2 00:10:41.287 remove_attach_helper took 42.89s to complete (handling 2 nvme drive(s)) 23:44:29 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78242 00:10:47.870 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78242) - No such process 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78242 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78790 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78790 00:10:47.870 23:44:35 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:47.870 23:44:35 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78790 ']' 00:10:47.870 23:44:35 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.870 23:44:35 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.870 23:44:35 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.870 23:44:35 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.870 23:44:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.870 [2024-11-26 23:44:35.170999] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
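The "42.89" figure reported above comes from the timing_cmd wrapper, which runs the helper under bash's time builtin with TIMEFORMAT=%2R so only the elapsed seconds are captured. A minimal sketch of that pattern (not the exact autotest_common.sh implementation; the helper name stands in for remove_attach_helper):

    # With TIMEFORMAT=%2R, bash's `time` prints only the wall-clock seconds (two decimals).
    measure() {
        local TIMEFORMAT=%2R elapsed
        elapsed=$( { time "$@" >/dev/null; } 2>&1 )   # `time` reports on stderr
        echo "$elapsed"
    }
    helper_time=$(measure sleep 1)                    # -> "1.00"
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$helper_time" 2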
00:10:47.870 [2024-11-26 23:44:35.171152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78790 ] 00:10:47.870 [2024-11-26 23:44:35.319140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.870 [2024-11-26 23:44:35.347460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:48.132 23:44:36 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:48.132 23:44:36 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.720 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.721 23:44:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.721 23:44:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.721 23:44:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:54.721 [2024-11-26 23:44:42.132525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:54.721 [2024-11-26 23:44:42.133666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-26 23:44:42.133700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-26 23:44:42.133713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 [2024-11-26 23:44:42.133724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-26 23:44:42.133733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-26 23:44:42.133740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 [2024-11-26 23:44:42.133749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-26 23:44:42.133756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-26 23:44:42.133764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 [2024-11-26 23:44:42.133770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-26 23:44:42.133778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-26 23:44:42.133784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 [2024-11-26 23:44:42.532518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
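When the hotplug test runs against spdk_tgt (use_bdev=true), the trace above enables SPDK's hotplug monitor and then polls bdev_get_bdevs until the removed controllers disappear from the bdev list. A minimal sketch of that polling loop, using the jq filter and messages shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_set_hotplug -e        # enable hotplug events, as in the trace

    bdev_bdfs() {
        $rpc bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }

    # After removing the devices, wait for the NVMe-backed bdevs to drain away.
    while (( $(bdev_bdfs | wc -l) > 0 )); do
        printf 'Still waiting for %s to be gone\n' $(bdev_bdfs)
        sleep 0.5
    done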
00:10:54.721 [2024-11-26 23:44:42.533568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-26 23:44:42.533599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-26 23:44:42.533608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 [2024-11-26 23:44:42.533619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-26 23:44:42.533626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-26 23:44:42.533634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 [2024-11-26 23:44:42.533641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-26 23:44:42.533648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-26 23:44:42.533655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 [2024-11-26 23:44:42.533665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-26 23:44:42.533672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-26 23:44:42.533680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.721 23:44:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.721 23:44:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.721 23:44:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.721 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.982 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.982 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.982 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.982 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:54.982 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.982 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.982 23:44:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.224 23:44:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.224 23:44:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.224 23:44:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:07.224 23:44:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.224 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.224 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.224 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.224 23:44:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.224 23:44:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.224 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.224 23:44:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.224 [2024-11-26 23:44:55.032733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:07.224 [2024-11-26 23:44:55.034303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.224 [2024-11-26 23:44:55.034349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.224 [2024-11-26 23:44:55.034367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.224 [2024-11-26 23:44:55.034384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.224 [2024-11-26 23:44:55.034396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.224 [2024-11-26 23:44:55.034405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.224 [2024-11-26 23:44:55.034415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.224 [2024-11-26 23:44:55.034424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.224 [2024-11-26 23:44:55.034435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.224 [2024-11-26 23:44:55.034443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.224 [2024-11-26 23:44:55.034454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.224 [2024-11-26 23:44:55.034463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.224 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:07.224 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:07.486 [2024-11-26 23:44:55.432764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:07.486 [2024-11-26 23:44:55.434442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.486 [2024-11-26 23:44:55.434501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.486 [2024-11-26 23:44:55.434517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.486 [2024-11-26 23:44:55.434536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.486 [2024-11-26 23:44:55.434545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.486 [2024-11-26 23:44:55.434558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.486 [2024-11-26 23:44:55.434568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.486 [2024-11-26 23:44:55.434580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.486 [2024-11-26 23:44:55.434588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.486 [2024-11-26 23:44:55.434602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.486 [2024-11-26 23:44:55.434612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.486 [2024-11-26 23:44:55.434623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.486 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:07.486 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.486 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.486 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.486 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.486 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.486 23:44:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.486 23:44:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.486 23:44:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.487 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:07.487 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.748 23:44:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.989 23:45:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.989 23:45:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.989 23:45:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.989 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.989 23:45:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.989 23:45:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.989 23:45:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.989 [2024-11-26 23:45:07.932970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
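
Each hotplug event starts with the pair of "echo 1" lines per loop pass (sw_hotplug.sh@39-40), one per device under test. The xtrace records only the echoed value, not where it is written; a plausible reconstruction, assuming the script targets the standard Linux PCI "remove" attribute for each function, looks like this:

# Hot-remove each NVMe device under test. The /sys path is an assumption
# based on the standard Linux PCI sysfs interface; the trace shows only
# the "echo 1" itself.
nvmes=(0000:00:10.0 0000:00:11.0)
for dev in "${nvmes[@]}"; do
  echo 1 > "/sys/bus/pci/devices/$dev/remove"
done
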
00:11:19.990 [2024-11-26 23:45:07.934023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.990 [2024-11-26 23:45:07.934055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.990 [2024-11-26 23:45:07.934069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.990 [2024-11-26 23:45:07.934082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.990 [2024-11-26 23:45:07.934090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.990 [2024-11-26 23:45:07.934097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.990 [2024-11-26 23:45:07.934105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.990 [2024-11-26 23:45:07.934111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.990 [2024-11-26 23:45:07.934120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.990 [2024-11-26 23:45:07.934126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.990 [2024-11-26 23:45:07.934134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.990 [2024-11-26 23:45:07.934140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.990 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:19.990 23:45:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:20.251 [2024-11-26 23:45:08.332972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
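
The check at sw_hotplug.sh@70-71 earlier in the trace is the other half of each event: after the devices are brought back, the sorted BDF list from bdev_get_bdevs must match the expected pair before the next hotplug round. Reduced to its essentials (reusing the bdev_bdfs sketch; the error message and exit code are additions of this sketch):

# Verify that every expected device is visible to the target again
# (the [[ ... == ... ]] comparison at sw_hotplug.sh@71 in the trace).
expected="0000:00:10.0 0000:00:11.0"
bdfs=($(bdev_bdfs))
if [[ "${bdfs[*]}" == "$expected" ]]; then
  echo "all nvme devices re-attached"
else
  echo "hotplug verification failed: got '${bdfs[*]}'" >&2
  exit 1
fi
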
00:11:20.251 [2024-11-26 23:45:08.333944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.251 [2024-11-26 23:45:08.333976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.251 [2024-11-26 23:45:08.333986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.251 [2024-11-26 23:45:08.333997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.251 [2024-11-26 23:45:08.334004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.251 [2024-11-26 23:45:08.334014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.251 [2024-11-26 23:45:08.334020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.251 [2024-11-26 23:45:08.334028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.251 [2024-11-26 23:45:08.334034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.251 [2024-11-26 23:45:08.334043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.251 [2024-11-26 23:45:08.334049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.251 [2024-11-26 23:45:08.334057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.514 23:45:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.514 23:45:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.514 23:45:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.514 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:20.776 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:20.776 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.776 23:45:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:33.091 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.71 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.71 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.71 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.71 2 00:11:33.092 remove_attach_helper took 44.71s to complete (handling 2 nvme drive(s)) 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:33.092 23:45:20 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:33.092 23:45:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:33.092 23:45:20 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.688 23:45:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.688 23:45:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.688 23:45:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:39.688 23:45:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.688 [2024-11-26 23:45:26.877059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:39.688 [2024-11-26 23:45:26.877890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.688 [2024-11-26 23:45:26.877917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.689 [2024-11-26 23:45:26.877931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.689 [2024-11-26 23:45:26.877943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.689 [2024-11-26 23:45:26.877952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.689 [2024-11-26 23:45:26.877959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.689 [2024-11-26 23:45:26.877967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.689 [2024-11-26 23:45:26.877973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.689 [2024-11-26 23:45:26.877983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.689 [2024-11-26 23:45:26.877990] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.689 [2024-11-26 23:45:26.877998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.689 [2024-11-26 23:45:26.878005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.689 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:39.689 23:45:27 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.689 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.689 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.689 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.689 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.689 23:45:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.689 23:45:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.689 23:45:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.689 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:39.689 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.689 [2024-11-26 23:45:27.577054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:39.689 [2024-11-26 23:45:27.577779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.689 [2024-11-26 23:45:27.577820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.689 [2024-11-26 23:45:27.577830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.689 [2024-11-26 23:45:27.577843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.689 [2024-11-26 23:45:27.577850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.689 [2024-11-26 23:45:27.577858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.689 [2024-11-26 23:45:27.577865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.689 [2024-11-26 23:45:27.577873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.689 [2024-11-26 23:45:27.577880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.689 [2024-11-26 23:45:27.577887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.689 [2024-11-26 23:45:27.577894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.689 [2024-11-26 23:45:27.577904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.949 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:39.949 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.949 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.949 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.949 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.949 23:45:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.949 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.949 23:45:27 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.949 23:45:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.949 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.949 23:45:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.949 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.949 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.949 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:40.209 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:40.209 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.209 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.209 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.209 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:40.209 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.209 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.209 23:45:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.456 23:45:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.456 23:45:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.456 23:45:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.456 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.457 23:45:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.457 23:45:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.457 23:45:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:52.457 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.457 [2024-11-26 23:45:40.377272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
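
Re-attaching is done per device with the echo sequence at sw_hotplug.sh@56-62 (a "1", the driver name, the BDF twice, then an empty string). Again the xtrace hides the redirection targets, so every /sys path below is an assumption; the usual Linux mechanism for forcing a specific userspace driver onto a PCI function is driver_override followed by a bind, with the override cleared afterwards. The sketch collapses the two BDF echoes into a single bind and does not claim to reproduce the script verbatim:

# Re-bind each device to uio_pci_generic so the SPDK hotplug poller can
# pick it up again. All /sys paths here are assumed, not taken from the
# trace, which only shows the echoed values.
nvmes=(0000:00:10.0 0000:00:11.0)
echo 1 > /sys/bus/pci/rescan                  # make the removed functions reappear
for dev in "${nvmes[@]}"; do
  echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
  echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind
  echo "" > "/sys/bus/pci/devices/$dev/driver_override"
done
sleep 12                                      # matches the "sleep 12" settle in the trace
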
00:11:52.457 [2024-11-26 23:45:40.378106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.457 [2024-11-26 23:45:40.378136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.457 [2024-11-26 23:45:40.378152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.457 [2024-11-26 23:45:40.378166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.457 [2024-11-26 23:45:40.378175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.457 [2024-11-26 23:45:40.378183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.457 [2024-11-26 23:45:40.378192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.457 [2024-11-26 23:45:40.378198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.457 [2024-11-26 23:45:40.378207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.457 [2024-11-26 23:45:40.378213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.457 [2024-11-26 23:45:40.378221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.457 [2024-11-26 23:45:40.378227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.724 [2024-11-26 23:45:40.777278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
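
Pieced together from the sw_hotplug.sh line numbers in the xtrace, one run of the helper that produces this whole section (invoked here as debug_remove_attach_helper 3 6 true: three hotplug events, a 6-second wait, verification through bdevs) has roughly the shape below. It is a reconstruction, not the script itself: wait_for_detach is the poll loop sketched earlier, reattach_devices stands for the driver_override/bind sequence above wrapped in a function, and the sysfs remove path is again an assumption. Only the use_bdev=true path, the one exercised in this log, is shown.

# Rough skeleton of remove_attach_helper as reconstructed from the trace.
remove_attach_helper() {
  local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
  local nvmes=(0000:00:10.0 0000:00:11.0) dev bdfs

  sleep "$hotplug_wait"                       # initial settle (sw_hotplug.sh@36)
  while (( hotplug_events-- )); do
    for dev in "${nvmes[@]}"; do              # detach every device (@39-40)
      echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed sysfs target
    done
    wait_for_detach                           # poll until the bdevs are gone (@50-51)
    reattach_devices                          # driver_override/bind step (@56-62)
    sleep $(( hotplug_wait * 2 ))             # matches the "sleep 12" records (@66)
    bdfs=($(bdev_bdfs))                       # confirm everything is back (@70-71)
    [[ "${bdfs[*]}" == "${nvmes[*]}" ]] || return 1
  done
}
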
00:11:52.724 [2024-11-26 23:45:40.778078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.724 [2024-11-26 23:45:40.778113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.724 [2024-11-26 23:45:40.778124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.724 [2024-11-26 23:45:40.778138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.724 [2024-11-26 23:45:40.778145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.724 [2024-11-26 23:45:40.778154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.724 [2024-11-26 23:45:40.778162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.724 [2024-11-26 23:45:40.778170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.724 [2024-11-26 23:45:40.778177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.724 [2024-11-26 23:45:40.778185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.724 [2024-11-26 23:45:40.778191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.724 [2024-11-26 23:45:40.778199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.982 23:45:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.982 23:45:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.982 23:45:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.982 23:45:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.982 23:45:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.982 23:45:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.982 23:45:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.982 23:45:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.982 23:45:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:52.982 23:45:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.240 23:45:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.240 23:45:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.430 23:45:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.430 23:45:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.430 23:45:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.430 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.430 [2024-11-26 23:45:53.177515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:05.430 [2024-11-26 23:45:53.178450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.430 [2024-11-26 23:45:53.178483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.430 [2024-11-26 23:45:53.178497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.430 [2024-11-26 23:45:53.178512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.430 [2024-11-26 23:45:53.178524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.431 [2024-11-26 23:45:53.178531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.431 [2024-11-26 23:45:53.178540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.431 [2024-11-26 23:45:53.178547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.431 [2024-11-26 23:45:53.178555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.431 [2024-11-26 23:45:53.178562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.431 [2024-11-26 23:45:53.178570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.431 [2024-11-26 23:45:53.178577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.431 23:45:53 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.431 23:45:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.431 23:45:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.431 23:45:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:05.431 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:05.691 [2024-11-26 23:45:53.577516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:05.691 [2024-11-26 23:45:53.578305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.691 [2024-11-26 23:45:53.578338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.691 [2024-11-26 23:45:53.578350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.691 [2024-11-26 23:45:53.578364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.691 [2024-11-26 23:45:53.578372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.691 [2024-11-26 23:45:53.578381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.691 [2024-11-26 23:45:53.578388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.691 [2024-11-26 23:45:53.578398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.691 [2024-11-26 23:45:53.578405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.691 [2024-11-26 23:45:53.578413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.691 [2024-11-26 23:45:53.578419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.691 [2024-11-26 23:45:53.578427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.691 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:05.691 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.691 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.691 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.691 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.691 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # 
sort -u 00:12:05.691 23:45:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.691 23:45:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.691 23:45:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.691 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:05.691 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.947 23:45:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.166 23:46:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:18.166 23:46:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:18.166 23:46:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:18.166 23:46:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.166 23:46:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.166 23:46:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.166 23:46:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.166 23:46:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.166 23:46:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.166 23:46:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.23 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.23 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:18.166 23:46:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.23 00:12:18.166 23:46:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.23 2 00:12:18.166 remove_attach_helper took 45.23s to complete (handling 2 nvme drive(s)) 23:46:06 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:18.166 23:46:06 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78790 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78790 ']' 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78790 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78790 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.166 killing process 
with pid 78790 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78790' 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78790 00:12:18.166 23:46:06 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78790 00:12:18.426 23:46:06 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:18.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:19.339 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.339 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.339 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.339 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.339 00:12:19.339 real 2m28.962s 00:12:19.339 user 1m49.965s 00:12:19.340 sys 0m17.467s 00:12:19.340 23:46:07 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.340 ************************************ 00:12:19.340 END TEST sw_hotplug 00:12:19.340 ************************************ 00:12:19.340 23:46:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.340 23:46:07 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:19.340 23:46:07 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:19.340 23:46:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.340 23:46:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.340 23:46:07 -- common/autotest_common.sh@10 -- # set +x 00:12:19.340 ************************************ 00:12:19.340 START TEST nvme_xnvme 00:12:19.340 ************************************ 00:12:19.340 23:46:07 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:19.604 * Looking for test storage... 
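
Before moving on to the next test, the harness tears the target down through its killprocess helper; the trace at autotest_common.sh@954-978 shows its shape: require a PID, check the process is still alive with kill -0, look up its command name with ps (refusing to kill a wrapping sudo), then kill and wait. A condensed sketch of that sequence; the non-Linux fallback and the final call line are assumptions added for completeness:

# Condensed version of the killprocess steps visible in the trace.
killprocess() {
  local pid=$1 process_name
  [[ -n "$pid" ]] || return 1                  # the '[ -z 78790 ]' guard
  kill -0 "$pid" || return 0                   # nothing to do if already gone
  if [[ "$(uname)" == Linux ]]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  else
    process_name=$(ps -o comm= -p "$pid")      # portable fallback, an assumption
  fi
  [[ "$process_name" == sudo ]] && return 1    # refuse to kill a wrapping sudo
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}

killprocess "$spdk_tgt_pid"                    # pid 78790 in this run
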
00:12:19.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.604 23:46:07 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:19.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.604 --rc genhtml_branch_coverage=1 00:12:19.604 --rc genhtml_function_coverage=1 00:12:19.604 --rc genhtml_legend=1 00:12:19.604 --rc geninfo_all_blocks=1 00:12:19.604 --rc geninfo_unexecuted_blocks=1 00:12:19.604 00:12:19.604 ' 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:19.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.604 --rc genhtml_branch_coverage=1 00:12:19.604 --rc genhtml_function_coverage=1 00:12:19.604 --rc genhtml_legend=1 00:12:19.604 --rc geninfo_all_blocks=1 00:12:19.604 --rc geninfo_unexecuted_blocks=1 00:12:19.604 00:12:19.604 ' 00:12:19.604 23:46:07 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:19.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.604 --rc genhtml_branch_coverage=1 00:12:19.604 --rc genhtml_function_coverage=1 00:12:19.604 --rc genhtml_legend=1 00:12:19.604 --rc geninfo_all_blocks=1 00:12:19.604 --rc geninfo_unexecuted_blocks=1 00:12:19.604 00:12:19.604 ' 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:19.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.604 --rc genhtml_branch_coverage=1 00:12:19.604 --rc genhtml_function_coverage=1 00:12:19.604 --rc genhtml_legend=1 00:12:19.604 --rc geninfo_all_blocks=1 00:12:19.604 --rc geninfo_unexecuted_blocks=1 00:12:19.604 00:12:19.604 ' 00:12:19.604 23:46:07 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:19.604 23:46:07 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:19.604 23:46:07 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:19.604 23:46:07 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:19.605 23:46:07 nvme_xnvme -- 
common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:19.605 23:46:07 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:19.605 23:46:07 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:19.605 23:46:07 nvme_xnvme -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:19.605 23:46:07 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:19.605 #define SPDK_CONFIG_H 00:12:19.605 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:19.605 #define SPDK_CONFIG_APPS 1 00:12:19.605 #define SPDK_CONFIG_ARCH native 00:12:19.605 #define SPDK_CONFIG_ASAN 1 00:12:19.605 #undef SPDK_CONFIG_AVAHI 00:12:19.605 #undef SPDK_CONFIG_CET 00:12:19.605 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:19.605 #define SPDK_CONFIG_COVERAGE 1 00:12:19.605 #define SPDK_CONFIG_CROSS_PREFIX 00:12:19.605 #undef SPDK_CONFIG_CRYPTO 00:12:19.605 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:19.605 #undef SPDK_CONFIG_CUSTOMOCF 00:12:19.605 #undef SPDK_CONFIG_DAOS 00:12:19.605 #define SPDK_CONFIG_DAOS_DIR 00:12:19.605 #define SPDK_CONFIG_DEBUG 1 00:12:19.605 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:19.605 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:19.605 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:19.605 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:19.605 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:19.605 #undef SPDK_CONFIG_DPDK_UADK 00:12:19.605 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:19.605 #define SPDK_CONFIG_EXAMPLES 1 00:12:19.605 #undef SPDK_CONFIG_FC 00:12:19.605 #define SPDK_CONFIG_FC_PATH 00:12:19.605 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:19.605 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:19.605 #define SPDK_CONFIG_FSDEV 1 00:12:19.605 #undef SPDK_CONFIG_FUSE 00:12:19.605 #undef SPDK_CONFIG_FUZZER 00:12:19.605 #define SPDK_CONFIG_FUZZER_LIB 00:12:19.605 #undef SPDK_CONFIG_GOLANG 00:12:19.605 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:19.605 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:19.605 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:19.605 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:19.605 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:19.605 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:19.605 #undef SPDK_CONFIG_HAVE_LZ4 00:12:19.605 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:19.605 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:19.605 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:19.605 #define SPDK_CONFIG_IDXD 1 00:12:19.605 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:19.605 #undef SPDK_CONFIG_IPSEC_MB 00:12:19.605 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:19.605 #define SPDK_CONFIG_ISAL 1 00:12:19.605 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:19.605 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:19.605 #define SPDK_CONFIG_LIBDIR 00:12:19.605 #undef SPDK_CONFIG_LTO 00:12:19.605 #define SPDK_CONFIG_MAX_LCORES 128 00:12:19.605 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:19.605 #define SPDK_CONFIG_NVME_CUSE 1 00:12:19.605 #undef 
SPDK_CONFIG_OCF 00:12:19.605 #define SPDK_CONFIG_OCF_PATH 00:12:19.605 #define SPDK_CONFIG_OPENSSL_PATH 00:12:19.605 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:19.605 #define SPDK_CONFIG_PGO_DIR 00:12:19.605 #undef SPDK_CONFIG_PGO_USE 00:12:19.606 #define SPDK_CONFIG_PREFIX /usr/local 00:12:19.606 #undef SPDK_CONFIG_RAID5F 00:12:19.606 #undef SPDK_CONFIG_RBD 00:12:19.606 #define SPDK_CONFIG_RDMA 1 00:12:19.606 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:19.606 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:19.606 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:19.606 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:19.606 #define SPDK_CONFIG_SHARED 1 00:12:19.606 #undef SPDK_CONFIG_SMA 00:12:19.606 #define SPDK_CONFIG_TESTS 1 00:12:19.606 #undef SPDK_CONFIG_TSAN 00:12:19.606 #define SPDK_CONFIG_UBLK 1 00:12:19.606 #define SPDK_CONFIG_UBSAN 1 00:12:19.606 #undef SPDK_CONFIG_UNIT_TESTS 00:12:19.606 #undef SPDK_CONFIG_URING 00:12:19.606 #define SPDK_CONFIG_URING_PATH 00:12:19.606 #undef SPDK_CONFIG_URING_ZNS 00:12:19.606 #undef SPDK_CONFIG_USDT 00:12:19.606 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:19.606 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:19.606 #undef SPDK_CONFIG_VFIO_USER 00:12:19.606 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:19.606 #define SPDK_CONFIG_VHOST 1 00:12:19.606 #define SPDK_CONFIG_VIRTIO 1 00:12:19.606 #undef SPDK_CONFIG_VTUNE 00:12:19.606 #define SPDK_CONFIG_VTUNE_DIR 00:12:19.606 #define SPDK_CONFIG_WERROR 1 00:12:19.606 #define SPDK_CONFIG_WPDK_DIR 00:12:19.606 #define SPDK_CONFIG_XNVME 1 00:12:19.606 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:19.606 23:46:07 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.606 23:46:07 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.606 23:46:07 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.606 23:46:07 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.606 23:46:07 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.606 23:46:07 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.606 23:46:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.606 23:46:07 nvme_xnvme -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.606 23:46:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.606 23:46:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:19.606 23:46:07 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:19.606 23:46:07 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:19.607 23:46:07 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:19.607 23:46:07 
nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@182 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.607 23:46:07 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:19.608 23:46:07 nvme_xnvme -- 
common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80132 ]] 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80132 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.cEzU3z 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.cEzU3z/tests/xnvme /tmp/spdk.cEzU3z 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13382885376 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:19.608 23:46:07 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=6199549952 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.608 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261960704 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13382885376 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6199549952 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:19.609 23:46:07 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98517315584 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1185464320 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:19.609 * Looking for test storage... 
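The xtrace above is set_test_storage filling the mounts/fss/sizes/avails/uses arrays from `df -T` before it walks storage_candidates looking for a mount point with at least the requested 2 GiB free. A condensed sketch of that selection loop follows; it is illustrative only (the paths are the ones from this run, the byte conversion is simplified via `df -B1`, and the tmpfs/ramfs resizing special cases the real helper performs are omitted).

# Illustrative sketch of the test-storage selection traced above (not the verbatim helper).
requested_size=2147483648   # 2 GiB, as requested in the trace
testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
storage_fallback=/tmp/spdk.cEzU3z
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

declare -A avails
while read -r source fs size use avail _ mount; do
    avails["$mount"]=$avail          # bytes available per mount point
done < <(df -T -B1 | grep -v Filesystem)

for target_dir in "${storage_candidates[@]}"; do
    # Map the candidate directory to its mount point, then check free space there.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done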
00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13382885376 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:19.609 23:46:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:19.870 23:46:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.870 23:46:07 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.870 23:46:07 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.870 23:46:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.870 --rc genhtml_branch_coverage=1 00:12:19.870 --rc genhtml_function_coverage=1 00:12:19.870 --rc genhtml_legend=1 00:12:19.870 --rc geninfo_all_blocks=1 00:12:19.870 --rc geninfo_unexecuted_blocks=1 00:12:19.870 00:12:19.870 ' 00:12:19.870 23:46:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:19.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.870 --rc genhtml_branch_coverage=1 00:12:19.870 --rc genhtml_function_coverage=1 00:12:19.870 --rc genhtml_legend=1 00:12:19.870 --rc geninfo_all_blocks=1 
00:12:19.871 --rc geninfo_unexecuted_blocks=1 00:12:19.871 00:12:19.871 ' 00:12:19.871 23:46:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:19.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.871 --rc genhtml_branch_coverage=1 00:12:19.871 --rc genhtml_function_coverage=1 00:12:19.871 --rc genhtml_legend=1 00:12:19.871 --rc geninfo_all_blocks=1 00:12:19.871 --rc geninfo_unexecuted_blocks=1 00:12:19.871 00:12:19.871 ' 00:12:19.871 23:46:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:19.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.871 --rc genhtml_branch_coverage=1 00:12:19.871 --rc genhtml_function_coverage=1 00:12:19.871 --rc genhtml_legend=1 00:12:19.871 --rc geninfo_all_blocks=1 00:12:19.871 --rc geninfo_unexecuted_blocks=1 00:12:19.871 00:12:19.871 ' 00:12:19.871 23:46:07 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.871 23:46:07 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.871 23:46:07 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.871 23:46:07 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.871 23:46:07 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.871 23:46:07 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.871 23:46:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.871 23:46:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.871 23:46:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.871 23:46:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.871 23:46:07 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:19.871 23:46:07 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:20.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.394 Waiting for block devices as requested 00:12:20.394 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.394 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.394 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.655 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.956 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:25.956 23:46:13 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:25.956 23:46:13 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:25.956 23:46:14 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:26.218 23:46:14 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:26.218 23:46:14 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:26.218 No valid GPT data, bailing 00:12:26.218 23:46:14 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:26.218 23:46:14 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:26.218 23:46:14 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:26.218 23:46:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:26.218 23:46:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.218 23:46:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.218 23:46:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.218 ************************************ 00:12:26.218 START TEST xnvme_rpc 00:12:26.218 ************************************ 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80526 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80526 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80526 ']' 00:12:26.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.218 23:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.499 [2024-11-26 23:46:14.394519] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:12:26.499 [2024-11-26 23:46:14.394702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80526 ] 00:12:26.499 [2024-11-26 23:46:14.543262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.499 [2024-11-26 23:46:14.584470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.440 xnvme_bdev 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80526 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80526 ']' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80526 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80526 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.440 killing process with pid 80526 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80526' 00:12:27.440 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80526 00:12:27.441 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80526 00:12:28.012 00:12:28.012 real 0m1.629s 00:12:28.012 user 0m1.567s 00:12:28.012 sys 0m0.520s 00:12:28.012 ************************************ 00:12:28.012 END TEST xnvme_rpc 00:12:28.012 ************************************ 00:12:28.012 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.012 23:46:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.012 23:46:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:28.012 23:46:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.012 23:46:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.012 23:46:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.012 ************************************ 00:12:28.012 START TEST xnvme_bdevperf 00:12:28.012 ************************************ 00:12:28.012 23:46:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:28.012 23:46:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:28.012 23:46:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:28.012 23:46:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:28.012 23:46:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:28.012 23:46:15 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:28.012 23:46:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:28.012 23:46:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:28.012 { 00:12:28.012 "subsystems": [ 00:12:28.012 { 00:12:28.012 "subsystem": "bdev", 00:12:28.012 "config": [ 00:12:28.012 { 00:12:28.012 "params": { 00:12:28.012 "io_mechanism": "libaio", 00:12:28.012 "conserve_cpu": false, 00:12:28.012 "filename": "/dev/nvme0n1", 00:12:28.012 "name": "xnvme_bdev" 00:12:28.012 }, 00:12:28.012 "method": "bdev_xnvme_create" 00:12:28.012 }, 00:12:28.012 { 00:12:28.012 "method": "bdev_wait_for_examine" 00:12:28.012 } 00:12:28.012 ] 00:12:28.012 } 00:12:28.012 ] 00:12:28.012 } 00:12:28.012 [2024-11-26 23:46:16.061342] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:12:28.012 [2024-11-26 23:46:16.061591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80583 ] 00:12:28.273 [2024-11-26 23:46:16.199460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.273 [2024-11-26 23:46:16.224812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.273 Running I/O for 5 seconds... 00:12:30.596 28993.00 IOPS, 113.25 MiB/s [2024-11-26T23:46:19.671Z] 26433.50 IOPS, 103.26 MiB/s [2024-11-26T23:46:20.616Z] 26271.67 IOPS, 102.62 MiB/s [2024-11-26T23:46:21.560Z] 26119.50 IOPS, 102.03 MiB/s 00:12:33.429 Latency(us) 00:12:33.429 [2024-11-26T23:46:21.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.429 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:33.429 xnvme_bdev : 5.00 25946.15 101.35 0.00 0.00 2461.61 513.58 7259.37 00:12:33.429 [2024-11-26T23:46:21.560Z] =================================================================================================================== 00:12:33.429 [2024-11-26T23:46:21.560Z] Total : 25946.15 101.35 0.00 0.00 2461.61 513.58 7259.37 00:12:33.690 23:46:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.690 23:46:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:33.690 23:46:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:33.690 23:46:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:33.690 23:46:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:33.690 { 00:12:33.690 "subsystems": [ 00:12:33.690 { 00:12:33.690 "subsystem": "bdev", 00:12:33.690 "config": [ 00:12:33.690 { 00:12:33.690 "params": { 00:12:33.690 "io_mechanism": "libaio", 00:12:33.690 "conserve_cpu": false, 00:12:33.690 "filename": "/dev/nvme0n1", 00:12:33.690 "name": "xnvme_bdev" 00:12:33.690 }, 00:12:33.690 "method": "bdev_xnvme_create" 00:12:33.690 }, 00:12:33.690 { 00:12:33.690 "method": "bdev_wait_for_examine" 00:12:33.690 } 00:12:33.690 ] 00:12:33.690 } 00:12:33.690 ] 00:12:33.690 } 00:12:33.690 [2024-11-26 23:46:21.711677] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
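The JSON fed to bdevperf over /dev/fd/62 above is the complete configuration, so the same randwrite run can be reproduced outside the harness by writing it to a file first. Everything in this sketch is copied from the trace except the /tmp path, which is only a placeholder.
cat > /tmp/xnvme_libaio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_libaio.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096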
00:12:33.690 [2024-11-26 23:46:21.712056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80653 ] 00:12:33.951 [2024-11-26 23:46:21.859130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.951 [2024-11-26 23:46:21.899678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.951 Running I/O for 5 seconds... 00:12:36.278 32965.00 IOPS, 128.77 MiB/s [2024-11-26T23:46:25.352Z] 33649.50 IOPS, 131.44 MiB/s [2024-11-26T23:46:26.306Z] 34534.33 IOPS, 134.90 MiB/s [2024-11-26T23:46:27.251Z] 30344.50 IOPS, 118.53 MiB/s [2024-11-26T23:46:27.251Z] 25365.20 IOPS, 99.08 MiB/s 00:12:39.120 Latency(us) 00:12:39.120 [2024-11-26T23:46:27.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.120 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:39.120 xnvme_bdev : 5.01 25313.05 98.88 0.00 0.00 2521.02 66.95 28029.24 00:12:39.120 [2024-11-26T23:46:27.251Z] =================================================================================================================== 00:12:39.120 [2024-11-26T23:46:27.251Z] Total : 25313.05 98.88 0.00 0.00 2521.02 66.95 28029.24 00:12:39.382 00:12:39.382 real 0m11.348s 00:12:39.382 user 0m4.240s 00:12:39.382 sys 0m5.686s 00:12:39.382 23:46:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.382 23:46:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:39.382 ************************************ 00:12:39.382 END TEST xnvme_bdevperf 00:12:39.382 ************************************ 00:12:39.382 23:46:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:39.382 23:46:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.382 23:46:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.382 23:46:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.382 ************************************ 00:12:39.382 START TEST xnvme_fio_plugin 00:12:39.382 ************************************ 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.382 23:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.382 { 00:12:39.382 "subsystems": [ 00:12:39.382 { 00:12:39.382 "subsystem": "bdev", 00:12:39.382 "config": [ 00:12:39.382 { 00:12:39.382 "params": { 00:12:39.382 "io_mechanism": "libaio", 00:12:39.382 "conserve_cpu": false, 00:12:39.382 "filename": "/dev/nvme0n1", 00:12:39.382 "name": "xnvme_bdev" 00:12:39.382 }, 00:12:39.382 "method": "bdev_xnvme_create" 00:12:39.382 }, 00:12:39.382 { 00:12:39.382 "method": "bdev_wait_for_examine" 00:12:39.382 } 00:12:39.382 ] 00:12:39.382 } 00:12:39.382 ] 00:12:39.382 } 00:12:39.644 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:39.644 fio-3.35 00:12:39.644 Starting 1 thread 00:12:46.269 00:12:46.269 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80761: Tue Nov 26 23:46:33 2024 00:12:46.269 read: IOPS=35.1k, BW=137MiB/s (144MB/s)(685MiB/5003msec) 00:12:46.269 slat (usec): min=4, max=1687, avg=20.31, stdev=86.57 00:12:46.269 clat (usec): min=100, max=14080, avg=1300.45, stdev=594.75 00:12:46.269 lat (usec): min=187, max=14084, avg=1320.77, stdev=589.35 00:12:46.269 clat percentiles (usec): 00:12:46.269 | 1.00th=[ 269], 5.00th=[ 490], 10.00th=[ 652], 20.00th=[ 857], 00:12:46.269 | 30.00th=[ 996], 40.00th=[ 1123], 50.00th=[ 1237], 60.00th=[ 1369], 00:12:46.269 | 70.00th=[ 1500], 80.00th=[ 1663], 90.00th=[ 1958], 95.00th=[ 2278], 00:12:46.269 | 99.00th=[ 3195], 99.50th=[ 3654], 99.90th=[ 5800], 99.95th=[ 6980], 00:12:46.269 | 99.99th=[ 8848] 00:12:46.269 bw ( KiB/s): 
min=132912, max=151256, per=99.76%, avg=139886.22, stdev=5352.36, samples=9 00:12:46.269 iops : min=33232, max=37810, avg=34971.56, stdev=1336.38, samples=9 00:12:46.269 lat (usec) : 250=0.76%, 500=4.53%, 750=8.97%, 1000=15.89% 00:12:46.269 lat (msec) : 2=60.89%, 4=8.65%, 10=0.32%, 20=0.01% 00:12:46.269 cpu : usr=40.76%, sys=48.74%, ctx=11, majf=0, minf=1065 00:12:46.269 IO depths : 1=0.2%, 2=0.7%, 4=2.4%, 8=8.1%, 16=23.5%, 32=62.8%, >=64=2.2% 00:12:46.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.269 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:46.269 issued rwts: total=175384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.269 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:46.269 00:12:46.269 Run status group 0 (all jobs): 00:12:46.269 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=685MiB (718MB), run=5003-5003msec 00:12:46.269 ----------------------------------------------------- 00:12:46.269 Suppressions used: 00:12:46.269 count bytes template 00:12:46.269 1 11 /usr/src/fio/parse.c 00:12:46.269 1 8 libtcmalloc_minimal.so 00:12:46.269 1 904 libcrypto.so 00:12:46.269 ----------------------------------------------------- 00:12:46.269 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
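The LD_PRELOAD value assembled here is not arbitrary: on an ASan-instrumented build the sanitizer runtime typically has to come first among preloaded objects, so the detected libasan.so.8 is placed ahead of the external spdk_bdev fio plugin. A standalone equivalent of the randwrite job that follows looks roughly like this; every flag is taken from the traced command line, and the /tmp config path is only a placeholder reusing the file from the earlier sketch.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_libaio.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev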
00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:46.269 23:46:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.269 { 00:12:46.269 "subsystems": [ 00:12:46.269 { 00:12:46.269 "subsystem": "bdev", 00:12:46.269 "config": [ 00:12:46.269 { 00:12:46.269 "params": { 00:12:46.269 "io_mechanism": "libaio", 00:12:46.269 "conserve_cpu": false, 00:12:46.269 "filename": "/dev/nvme0n1", 00:12:46.269 "name": "xnvme_bdev" 00:12:46.269 }, 00:12:46.269 "method": "bdev_xnvme_create" 00:12:46.269 }, 00:12:46.269 { 00:12:46.269 "method": "bdev_wait_for_examine" 00:12:46.269 } 00:12:46.269 ] 00:12:46.269 } 00:12:46.269 ] 00:12:46.269 } 00:12:46.269 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:46.269 fio-3.35 00:12:46.269 Starting 1 thread 00:12:51.563 00:12:51.563 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80842: Tue Nov 26 23:46:39 2024 00:12:51.563 write: IOPS=19.6k, BW=76.5MiB/s (80.2MB/s)(383MiB/5007msec); 0 zone resets 00:12:51.563 slat (usec): min=4, max=1300, avg=14.63, stdev=52.76 00:12:51.563 clat (usec): min=10, max=21592, avg=3093.49, stdev=4190.47 00:12:51.563 lat (usec): min=53, max=21597, avg=3108.12, stdev=4188.85 00:12:51.563 clat percentiles (usec): 00:12:51.563 | 1.00th=[ 118], 5.00th=[ 251], 10.00th=[ 355], 20.00th=[ 537], 00:12:51.563 | 30.00th=[ 660], 40.00th=[ 750], 50.00th=[ 832], 60.00th=[ 996], 00:12:51.563 | 70.00th=[ 1434], 80.00th=[ 8029], 90.00th=[10814], 95.00th=[11994], 00:12:51.563 | 99.00th=[13829], 99.50th=[14353], 99.90th=[16450], 99.95th=[17957], 00:12:51.563 | 99.99th=[20055] 00:12:51.563 bw ( KiB/s): min=60040, max=95744, per=100.00%, avg=78358.40, stdev=16126.83, samples=10 00:12:51.563 iops : min=15010, max=23936, avg=19589.60, stdev=4031.71, samples=10 00:12:51.563 lat (usec) : 20=0.03%, 50=0.14%, 100=0.50%, 250=4.30%, 500=13.05% 00:12:51.563 lat (usec) : 750=22.54%, 1000=19.70% 00:12:51.563 lat (msec) : 2=12.99%, 4=1.59%, 10=11.79%, 20=13.38%, 50=0.01% 00:12:51.563 cpu : usr=72.03%, sys=16.64%, ctx=9, majf=0, minf=1065 00:12:51.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.4%, 16=3.4%, 32=86.0%, >=64=10.1% 00:12:51.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.564 complete : 0=0.0%, 4=94.5%, 8=1.7%, 16=2.0%, 32=1.4%, 64=0.4%, >=64=0.0% 00:12:51.564 issued rwts: total=0,97999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.564 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:51.564 00:12:51.564 Run status group 0 (all jobs): 00:12:51.564 WRITE: bw=76.5MiB/s (80.2MB/s), 76.5MiB/s-76.5MiB/s (80.2MB/s-80.2MB/s), io=383MiB (401MB), run=5007-5007msec 00:12:51.833 ----------------------------------------------------- 00:12:51.833 Suppressions used: 00:12:51.833 count bytes template 00:12:51.833 1 11 /usr/src/fio/parse.c 00:12:51.833 1 8 libtcmalloc_minimal.so 00:12:51.833 1 904 libcrypto.so 00:12:51.833 ----------------------------------------------------- 00:12:51.833 00:12:51.833 
00:12:51.833 real 0m12.356s 00:12:51.833 user 0m6.957s 00:12:51.833 sys 0m3.922s 00:12:51.833 ************************************ 00:12:51.833 END TEST xnvme_fio_plugin 00:12:51.833 ************************************ 00:12:51.833 23:46:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.833 23:46:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:51.833 23:46:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:51.833 23:46:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:51.833 23:46:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:51.833 23:46:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:51.833 23:46:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:51.833 23:46:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.833 23:46:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.833 ************************************ 00:12:51.833 START TEST xnvme_rpc 00:12:51.833 ************************************ 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:51.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80930 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80930 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80930 ']' 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.833 23:46:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:51.833 [2024-11-26 23:46:39.934554] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
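The conserve_cpu=true pass starting here differs from the first libaio round only in the create call: the cc map defined at the top of the test expands to -c, which flips "conserve_cpu" to true in the generated config. In the sketch notation used earlier, with $rpc standing in for the harness's rpc_cmd wrapper:
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c => "conserve_cpu": true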
00:12:51.833 [2024-11-26 23:46:39.935278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80930 ] 00:12:52.094 [2024-11-26 23:46:40.084117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.094 [2024-11-26 23:46:40.124713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.668 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.668 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:52.668 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:52.668 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.668 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.931 xnvme_bdev 00:12:52.931 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.931 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:52.931 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.931 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:52.931 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.931 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.931 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80930 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80930 ']' 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80930 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80930 00:12:52.932 killing process with pid 80930 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80930' 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80930 00:12:52.932 23:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80930 00:12:53.505 ************************************ 00:12:53.505 END TEST xnvme_rpc 00:12:53.505 ************************************ 00:12:53.505 00:12:53.505 real 0m1.642s 00:12:53.505 user 0m1.589s 00:12:53.505 sys 0m0.512s 00:12:53.505 23:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.505 23:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.505 23:46:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:53.505 23:46:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.506 23:46:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.506 23:46:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.506 ************************************ 00:12:53.506 START TEST xnvme_bdevperf 00:12:53.506 ************************************ 00:12:53.506 23:46:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:53.506 23:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:53.506 23:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:53.506 23:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:53.506 23:46:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:53.506 23:46:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:53.506 23:46:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:53.506 23:46:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:53.506 { 00:12:53.506 "subsystems": [ 00:12:53.506 { 00:12:53.506 "subsystem": "bdev", 00:12:53.506 "config": [ 00:12:53.506 { 00:12:53.506 "params": { 00:12:53.506 "io_mechanism": "libaio", 00:12:53.506 "conserve_cpu": true, 00:12:53.506 "filename": "/dev/nvme0n1", 00:12:53.506 "name": "xnvme_bdev" 00:12:53.506 }, 00:12:53.506 "method": "bdev_xnvme_create" 00:12:53.506 }, 00:12:53.506 { 00:12:53.506 "method": "bdev_wait_for_examine" 00:12:53.506 } 00:12:53.506 ] 00:12:53.506 } 00:12:53.506 ] 00:12:53.506 } 00:12:53.766 [2024-11-26 23:46:41.637260] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:12:53.766 [2024-11-26 23:46:41.637577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80982 ] 00:12:53.766 [2024-11-26 23:46:41.787641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.767 [2024-11-26 23:46:41.828227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.028 Running I/O for 5 seconds... 00:12:55.916 34149.00 IOPS, 133.39 MiB/s [2024-11-26T23:46:45.045Z] 32192.50 IOPS, 125.75 MiB/s [2024-11-26T23:46:46.446Z] 32367.00 IOPS, 126.43 MiB/s [2024-11-26T23:46:47.020Z] 33002.75 IOPS, 128.92 MiB/s 00:12:58.889 Latency(us) 00:12:58.889 [2024-11-26T23:46:47.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.889 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:58.889 xnvme_bdev : 5.00 33661.31 131.49 0.00 0.00 1896.73 150.45 18350.08 00:12:58.889 [2024-11-26T23:46:47.020Z] =================================================================================================================== 00:12:58.889 [2024-11-26T23:46:47.020Z] Total : 33661.31 131.49 0.00 0.00 1896.73 150.45 18350.08 00:12:59.151 23:46:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:59.151 23:46:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:59.151 23:46:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:59.151 23:46:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:59.151 23:46:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 { 00:12:59.412 "subsystems": [ 00:12:59.412 { 00:12:59.412 "subsystem": "bdev", 00:12:59.412 "config": [ 00:12:59.412 { 00:12:59.412 "params": { 00:12:59.412 "io_mechanism": "libaio", 00:12:59.412 "conserve_cpu": true, 00:12:59.412 "filename": "/dev/nvme0n1", 00:12:59.412 "name": "xnvme_bdev" 00:12:59.412 }, 00:12:59.412 "method": "bdev_xnvme_create" 00:12:59.412 }, 00:12:59.412 { 00:12:59.412 "method": "bdev_wait_for_examine" 00:12:59.412 } 00:12:59.412 ] 00:12:59.412 } 00:12:59.412 ] 00:12:59.412 } 00:12:59.412 [2024-11-26 23:46:47.343618] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
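Reading the bdevperf tables: the MiB/s column is just IOPS times the 4 KiB I/O size set with -o 4096. For the randread row above, 33661.31 IOPS x 4096 B = 137,876,726 B/s, which is 131.49 MiB/s as reported, and the Average/min/max columns are completion latency in microseconds, per the Latency(us) header.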
00:12:59.412 [2024-11-26 23:46:47.343781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81052 ] 00:12:59.412 [2024-11-26 23:46:47.492011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.412 [2024-11-26 23:46:47.533770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.672 Running I/O for 5 seconds... 00:13:01.998 8972.00 IOPS, 35.05 MiB/s [2024-11-26T23:46:50.702Z] 8953.00 IOPS, 34.97 MiB/s [2024-11-26T23:46:52.090Z] 16211.67 IOPS, 63.33 MiB/s [2024-11-26T23:46:53.036Z] 19971.75 IOPS, 78.01 MiB/s [2024-11-26T23:46:53.036Z] 22497.40 IOPS, 87.88 MiB/s 00:13:04.905 Latency(us) 00:13:04.905 [2024-11-26T23:46:53.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.905 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:04.905 xnvme_bdev : 5.00 22488.46 87.85 0.00 0.00 2840.98 46.28 21878.94 00:13:04.905 [2024-11-26T23:46:53.036Z] =================================================================================================================== 00:13:04.905 [2024-11-26T23:46:53.036Z] Total : 22488.46 87.85 0.00 0.00 2840.98 46.28 21878.94 00:13:04.905 ************************************ 00:13:04.905 END TEST xnvme_bdevperf 00:13:04.905 ************************************ 00:13:04.905 00:13:04.905 real 0m11.406s 00:13:04.905 user 0m4.531s 00:13:04.905 sys 0m5.264s 00:13:04.905 23:46:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.905 23:46:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:04.905 23:46:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:04.905 23:46:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:04.905 23:46:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.905 23:46:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.166 ************************************ 00:13:05.166 START TEST xnvme_fio_plugin 00:13:05.166 ************************************ 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:05.166 23:46:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.166 { 00:13:05.166 "subsystems": [ 00:13:05.166 { 00:13:05.166 "subsystem": "bdev", 00:13:05.166 "config": [ 00:13:05.166 { 00:13:05.166 "params": { 00:13:05.166 "io_mechanism": "libaio", 00:13:05.166 "conserve_cpu": true, 00:13:05.166 "filename": "/dev/nvme0n1", 00:13:05.166 "name": "xnvme_bdev" 00:13:05.166 }, 00:13:05.166 "method": "bdev_xnvme_create" 00:13:05.166 }, 00:13:05.166 { 00:13:05.166 "method": "bdev_wait_for_examine" 00:13:05.166 } 00:13:05.166 ] 00:13:05.166 } 00:13:05.166 ] 00:13:05.166 } 00:13:05.166 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:05.166 fio-3.35 00:13:05.166 Starting 1 thread 00:13:11.757 00:13:11.757 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81160: Tue Nov 26 23:46:58 2024 00:13:11.757 read: IOPS=34.5k, BW=135MiB/s (141MB/s)(674MiB/5001msec) 00:13:11.757 slat (usec): min=4, max=3515, avg=19.51, stdev=90.19 00:13:11.757 clat (usec): min=109, max=5200, avg=1326.14, stdev=500.33 00:13:11.757 lat (usec): min=199, max=5343, avg=1345.65, stdev=491.89 00:13:11.757 clat percentiles (usec): 00:13:11.757 | 1.00th=[ 293], 5.00th=[ 553], 10.00th=[ 709], 20.00th=[ 914], 00:13:11.757 | 30.00th=[ 1074], 40.00th=[ 1205], 50.00th=[ 1319], 60.00th=[ 1434], 00:13:11.757 | 70.00th=[ 1549], 80.00th=[ 1696], 90.00th=[ 1926], 95.00th=[ 2114], 00:13:11.757 | 99.00th=[ 2802], 99.50th=[ 3163], 99.90th=[ 3916], 99.95th=[ 4080], 00:13:11.757 | 99.99th=[ 4621] 00:13:11.757 bw ( KiB/s): min=131944, max=144096, 
per=99.70%, avg=137676.44, stdev=4064.88, samples=9 00:13:11.757 iops : min=32986, max=36024, avg=34419.11, stdev=1016.22, samples=9 00:13:11.757 lat (usec) : 250=0.58%, 500=3.38%, 750=7.40%, 1000=13.89% 00:13:11.757 lat (msec) : 2=67.10%, 4=7.59%, 10=0.07% 00:13:11.757 cpu : usr=46.12%, sys=45.56%, ctx=10, majf=0, minf=1065 00:13:11.757 IO depths : 1=0.6%, 2=1.4%, 4=3.3%, 8=8.6%, 16=23.1%, 32=60.9%, >=64=2.1% 00:13:11.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.757 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:11.757 issued rwts: total=172656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.757 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:11.757 00:13:11.757 Run status group 0 (all jobs): 00:13:11.757 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=674MiB (707MB), run=5001-5001msec 00:13:11.757 ----------------------------------------------------- 00:13:11.757 Suppressions used: 00:13:11.757 count bytes template 00:13:11.757 1 11 /usr/src/fio/parse.c 00:13:11.757 1 8 libtcmalloc_minimal.so 00:13:11.757 1 904 libcrypto.so 00:13:11.757 ----------------------------------------------------- 00:13:11.757 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:11.757 23:46:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:11.757 23:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:11.757 { 00:13:11.757 "subsystems": [ 00:13:11.757 { 00:13:11.757 "subsystem": "bdev", 00:13:11.757 "config": [ 00:13:11.757 { 00:13:11.757 "params": { 00:13:11.757 "io_mechanism": "libaio", 00:13:11.757 "conserve_cpu": true, 00:13:11.757 "filename": "/dev/nvme0n1", 00:13:11.757 "name": "xnvme_bdev" 00:13:11.757 }, 00:13:11.757 "method": "bdev_xnvme_create" 00:13:11.757 }, 00:13:11.757 { 00:13:11.757 "method": "bdev_wait_for_examine" 00:13:11.757 } 00:13:11.757 ] 00:13:11.757 } 00:13:11.757 ] 00:13:11.757 } 00:13:11.757 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:11.758 fio-3.35 00:13:11.758 Starting 1 thread 00:13:17.056 00:13:17.056 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81246: Tue Nov 26 23:47:04 2024 00:13:17.056 write: IOPS=26.6k, BW=104MiB/s (109MB/s)(521MiB/5008msec); 0 zone resets 00:13:17.056 slat (usec): min=4, max=1892, avg=20.44, stdev=81.61 00:13:17.056 clat (usec): min=8, max=18883, avg=1916.40, stdev=2569.97 00:13:17.056 lat (usec): min=52, max=18887, avg=1936.84, stdev=2566.92 00:13:17.056 clat percentiles (usec): 00:13:17.056 | 1.00th=[ 190], 5.00th=[ 383], 10.00th=[ 545], 20.00th=[ 734], 00:13:17.056 | 30.00th=[ 898], 40.00th=[ 1074], 50.00th=[ 1237], 60.00th=[ 1401], 00:13:17.056 | 70.00th=[ 1565], 80.00th=[ 1811], 90.00th=[ 2573], 95.00th=[ 9634], 00:13:17.056 | 99.00th=[12649], 99.50th=[13304], 99.90th=[14877], 99.95th=[16319], 00:13:17.056 | 99.99th=[18482] 00:13:17.056 bw ( KiB/s): min=67872, max=144328, per=100.00%, avg=106648.00, stdev=33781.85, samples=10 00:13:17.056 iops : min=16968, max=36082, avg=26662.20, stdev=8445.83, samples=10 00:13:17.056 lat (usec) : 10=0.01%, 20=0.01%, 50=0.04%, 100=0.14%, 250=1.56% 00:13:17.056 lat (usec) : 500=6.81%, 750=12.55%, 1000=14.71% 00:13:17.056 lat (msec) : 2=48.96%, 4=7.45%, 10=3.25%, 20=4.53% 00:13:17.056 cpu : usr=53.86%, sys=35.79%, ctx=10, majf=0, minf=1065 00:13:17.056 IO depths : 1=0.4%, 2=0.9%, 4=2.4%, 8=6.7%, 16=18.2%, 32=66.9%, >=64=4.5% 00:13:17.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.056 complete : 0=0.0%, 4=96.9%, 8=0.6%, 16=0.7%, 32=0.5%, 64=1.3%, >=64=0.0% 00:13:17.056 issued rwts: total=0,133371,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.056 00:13:17.056 Run status group 0 (all jobs): 00:13:17.056 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=521MiB (546MB), run=5008-5008msec 00:13:17.317 ----------------------------------------------------- 00:13:17.317 Suppressions used: 00:13:17.317 count bytes template 00:13:17.317 1 11 /usr/src/fio/parse.c 00:13:17.317 1 8 libtcmalloc_minimal.so 00:13:17.317 1 904 libcrypto.so 00:13:17.317 ----------------------------------------------------- 00:13:17.317 00:13:17.317 00:13:17.317 real 0m12.314s 
00:13:17.317 user 0m6.277s 00:13:17.317 sys 0m4.741s 00:13:17.317 23:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.317 23:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:17.317 ************************************ 00:13:17.317 END TEST xnvme_fio_plugin 00:13:17.317 ************************************ 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:17.317 23:47:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:17.317 23:47:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.317 23:47:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.317 23:47:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.317 ************************************ 00:13:17.317 START TEST xnvme_rpc 00:13:17.317 ************************************ 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:17.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81321 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81321 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81321 ']' 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.317 23:47:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.578 [2024-11-26 23:47:05.513858] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
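From this point the same rpc, bdevperf, and fio_plugin cycle repeats with io_mechanism=io_uring against the same /dev/nvme0n1 block device; per the xnvme_filename map at the top of this section, only io_uring_cmd would switch to the /dev/ng0n1 character device. In the sketch notation used earlier, with $rpc standing in for the harness's rpc_cmd wrapper, the create call becomes:
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring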
00:13:17.578 [2024-11-26 23:47:05.514277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81321 ] 00:13:17.578 [2024-11-26 23:47:05.660888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.578 [2024-11-26 23:47:05.702398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.523 xnvme_bdev 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81321 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81321 ']' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81321 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81321 00:13:18.523 killing process with pid 81321 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81321' 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81321 00:13:18.523 23:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81321 00:13:19.096 ************************************ 00:13:19.096 END TEST xnvme_rpc 00:13:19.096 ************************************ 00:13:19.096 00:13:19.096 real 0m1.643s 00:13:19.096 user 0m1.641s 00:13:19.096 sys 0m0.502s 00:13:19.096 23:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.096 23:47:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.096 23:47:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:19.096 23:47:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:19.096 23:47:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.096 23:47:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.096 ************************************ 00:13:19.096 START TEST xnvme_bdevperf 00:13:19.096 ************************************ 00:13:19.096 23:47:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:19.096 23:47:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:19.096 23:47:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:19.096 23:47:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:19.096 23:47:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:19.096 23:47:07 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:19.096 23:47:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:19.096 23:47:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:19.096 { 00:13:19.096 "subsystems": [ 00:13:19.096 { 00:13:19.096 "subsystem": "bdev", 00:13:19.096 "config": [ 00:13:19.096 { 00:13:19.096 "params": { 00:13:19.096 "io_mechanism": "io_uring", 00:13:19.096 "conserve_cpu": false, 00:13:19.096 "filename": "/dev/nvme0n1", 00:13:19.096 "name": "xnvme_bdev" 00:13:19.096 }, 00:13:19.096 "method": "bdev_xnvme_create" 00:13:19.096 }, 00:13:19.096 { 00:13:19.096 "method": "bdev_wait_for_examine" 00:13:19.096 } 00:13:19.096 ] 00:13:19.096 } 00:13:19.096 ] 00:13:19.096 } 00:13:19.096 [2024-11-26 23:47:07.203046] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:13:19.096 [2024-11-26 23:47:07.204247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81383 ] 00:13:19.358 [2024-11-26 23:47:07.363331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.358 [2024-11-26 23:47:07.404423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.619 Running I/O for 5 seconds... 00:13:21.504 32773.00 IOPS, 128.02 MiB/s [2024-11-26T23:47:10.579Z] 32387.00 IOPS, 126.51 MiB/s [2024-11-26T23:47:11.967Z] 32484.00 IOPS, 126.89 MiB/s [2024-11-26T23:47:12.908Z] 32312.50 IOPS, 126.22 MiB/s 00:13:24.777 Latency(us) 00:13:24.777 [2024-11-26T23:47:12.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.777 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:24.777 xnvme_bdev : 5.00 32340.65 126.33 0.00 0.00 1974.46 365.49 17946.78 00:13:24.777 [2024-11-26T23:47:12.908Z] =================================================================================================================== 00:13:24.777 [2024-11-26T23:47:12.908Z] Total : 32340.65 126.33 0.00 0.00 1974.46 365.49 17946.78 00:13:24.777 23:47:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:24.777 23:47:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:24.777 23:47:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:24.777 23:47:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:24.777 23:47:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:24.777 { 00:13:24.777 "subsystems": [ 00:13:24.777 { 00:13:24.777 "subsystem": "bdev", 00:13:24.777 "config": [ 00:13:24.777 { 00:13:24.778 "params": { 00:13:24.778 "io_mechanism": "io_uring", 00:13:24.778 "conserve_cpu": false, 00:13:24.778 "filename": "/dev/nvme0n1", 00:13:24.778 "name": "xnvme_bdev" 00:13:24.778 }, 00:13:24.778 "method": "bdev_xnvme_create" 00:13:24.778 }, 00:13:24.778 { 00:13:24.778 "method": "bdev_wait_for_examine" 00:13:24.778 } 00:13:24.778 ] 00:13:24.778 } 00:13:24.778 ] 00:13:24.778 } 00:13:24.778 [2024-11-26 23:47:12.891532] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:13:24.778 [2024-11-26 23:47:12.891695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81455 ] 00:13:25.037 [2024-11-26 23:47:13.040117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.037 [2024-11-26 23:47:13.080645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.298 Running I/O for 5 seconds... 00:13:27.228 10657.00 IOPS, 41.63 MiB/s [2024-11-26T23:47:16.301Z] 10603.50 IOPS, 41.42 MiB/s [2024-11-26T23:47:17.246Z] 10548.00 IOPS, 41.20 MiB/s [2024-11-26T23:47:18.252Z] 10553.50 IOPS, 41.22 MiB/s [2024-11-26T23:47:18.252Z] 10567.80 IOPS, 41.28 MiB/s 00:13:30.121 Latency(us) 00:13:30.121 [2024-11-26T23:47:18.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.121 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:30.121 xnvme_bdev : 5.01 10552.61 41.22 0.00 0.00 6054.23 70.10 23592.96 00:13:30.121 [2024-11-26T23:47:18.252Z] =================================================================================================================== 00:13:30.121 [2024-11-26T23:47:18.252Z] Total : 10552.61 41.22 0.00 0.00 6054.23 70.10 23592.96 00:13:30.382 00:13:30.382 real 0m11.370s 00:13:30.382 user 0m4.441s 00:13:30.382 sys 0m6.667s 00:13:30.382 23:47:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.382 ************************************ 00:13:30.382 END TEST xnvme_bdevperf 00:13:30.382 ************************************ 00:13:30.382 23:47:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 23:47:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:30.643 23:47:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:30.643 23:47:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.643 23:47:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 ************************************ 00:13:30.643 START TEST xnvme_fio_plugin 00:13:30.643 ************************************ 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:30.643 23:47:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:30.643 { 00:13:30.643 "subsystems": [ 00:13:30.643 { 00:13:30.643 "subsystem": "bdev", 00:13:30.643 "config": [ 00:13:30.643 { 00:13:30.643 "params": { 00:13:30.643 "io_mechanism": "io_uring", 00:13:30.643 "conserve_cpu": false, 00:13:30.643 "filename": "/dev/nvme0n1", 00:13:30.643 "name": "xnvme_bdev" 00:13:30.643 }, 00:13:30.643 "method": "bdev_xnvme_create" 00:13:30.643 }, 00:13:30.643 { 00:13:30.643 "method": "bdev_wait_for_examine" 00:13:30.643 } 00:13:30.643 ] 00:13:30.643 } 00:13:30.643 ] 00:13:30.643 } 00:13:30.904 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:30.904 fio-3.35 00:13:30.904 Starting 1 thread 00:13:36.201 00:13:36.201 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81558: Tue Nov 26 23:47:24 2024 00:13:36.201 read: IOPS=33.4k, BW=130MiB/s (137MB/s)(652MiB/5001msec) 00:13:36.201 slat (usec): min=2, max=207, avg= 3.89, stdev= 2.68 00:13:36.201 clat (usec): min=949, max=3730, avg=1758.26, stdev=318.43 00:13:36.201 lat (usec): min=952, max=3739, avg=1762.16, stdev=318.86 00:13:36.201 clat percentiles (usec): 00:13:36.201 | 1.00th=[ 1205], 5.00th=[ 1336], 10.00th=[ 1401], 20.00th=[ 1483], 00:13:36.201 | 30.00th=[ 1565], 40.00th=[ 1631], 50.00th=[ 1713], 60.00th=[ 1795], 00:13:36.201 | 70.00th=[ 1893], 80.00th=[ 2008], 90.00th=[ 2180], 95.00th=[ 2343], 00:13:36.201 | 99.00th=[ 2671], 99.50th=[ 2868], 99.90th=[ 3163], 99.95th=[ 3392], 00:13:36.201 | 99.99th=[ 3621] 00:13:36.201 bw ( KiB/s): min=127233, 
max=141312, per=100.00%, avg=134940.56, stdev=4783.03, samples=9 00:13:36.201 iops : min=31808, max=35328, avg=33735.11, stdev=1195.81, samples=9 00:13:36.201 lat (usec) : 1000=0.03% 00:13:36.201 lat (msec) : 2=79.52%, 4=20.46% 00:13:36.201 cpu : usr=30.80%, sys=67.20%, ctx=12, majf=0, minf=1063 00:13:36.201 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:36.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.201 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:36.201 issued rwts: total=166912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.201 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:36.201 00:13:36.201 Run status group 0 (all jobs): 00:13:36.201 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=652MiB (684MB), run=5001-5001msec 00:13:36.773 ----------------------------------------------------- 00:13:36.773 Suppressions used: 00:13:36.773 count bytes template 00:13:36.773 1 11 /usr/src/fio/parse.c 00:13:36.773 1 8 libtcmalloc_minimal.so 00:13:36.773 1 904 libcrypto.so 00:13:36.773 ----------------------------------------------------- 00:13:36.773 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:36.773 23:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:36.773 { 00:13:36.773 "subsystems": [ 00:13:36.773 { 00:13:36.773 "subsystem": "bdev", 00:13:36.773 "config": [ 00:13:36.773 { 00:13:36.773 "params": { 00:13:36.773 "io_mechanism": "io_uring", 00:13:36.773 "conserve_cpu": false, 00:13:36.773 "filename": "/dev/nvme0n1", 00:13:36.773 "name": "xnvme_bdev" 00:13:36.773 }, 00:13:36.773 "method": "bdev_xnvme_create" 00:13:36.773 }, 00:13:36.773 { 00:13:36.773 "method": "bdev_wait_for_examine" 00:13:36.773 } 00:13:36.773 ] 00:13:36.773 } 00:13:36.773 ] 00:13:36.773 } 00:13:36.773 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:36.773 fio-3.35 00:13:36.773 Starting 1 thread 00:13:43.365 00:13:43.365 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81644: Tue Nov 26 23:47:30 2024 00:13:43.365 write: IOPS=31.6k, BW=124MiB/s (130MB/s)(619MiB/5008msec); 0 zone resets 00:13:43.365 slat (nsec): min=2879, max=79989, avg=3966.57, stdev=2117.33 00:13:43.365 clat (usec): min=64, max=24815, avg=1883.23, stdev=1960.78 00:13:43.365 lat (usec): min=68, max=24819, avg=1887.20, stdev=1960.90 00:13:43.365 clat percentiles (usec): 00:13:43.365 | 1.00th=[ 465], 5.00th=[ 799], 10.00th=[ 1090], 20.00th=[ 1237], 00:13:43.365 | 30.00th=[ 1319], 40.00th=[ 1401], 50.00th=[ 1483], 60.00th=[ 1582], 00:13:43.365 | 70.00th=[ 1696], 80.00th=[ 1827], 90.00th=[ 2073], 95.00th=[ 2540], 00:13:43.365 | 99.00th=[11994], 99.50th=[13042], 99.90th=[17171], 99.95th=[20841], 00:13:43.365 | 99.99th=[23462] 00:13:43.365 bw ( KiB/s): min=70552, max=172088, per=100.00%, avg=126746.40, stdev=38227.25, samples=10 00:13:43.365 iops : min=17638, max=43022, avg=31686.60, stdev=9556.81, samples=10 00:13:43.365 lat (usec) : 100=0.01%, 250=0.16%, 500=1.01%, 750=2.89%, 1000=3.64% 00:13:43.365 lat (msec) : 2=80.00%, 4=7.80%, 10=2.01%, 20=2.42%, 50=0.06% 00:13:43.365 cpu : usr=32.06%, sys=66.69%, ctx=9, majf=0, minf=1063 00:13:43.365 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.5%, 16=21.5%, 32=55.7%, >=64=3.1% 00:13:43.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.365 complete : 0=0.0%, 4=97.8%, 8=0.3%, 16=0.3%, 32=0.2%, 64=1.3%, >=64=0.0% 00:13:43.365 issued rwts: total=0,158495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.365 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:43.365 00:13:43.365 Run status group 0 (all jobs): 00:13:43.365 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=619MiB (649MB), run=5008-5008msec 00:13:43.365 ----------------------------------------------------- 00:13:43.365 Suppressions used: 00:13:43.365 count bytes template 00:13:43.365 1 11 /usr/src/fio/parse.c 00:13:43.365 1 8 libtcmalloc_minimal.so 00:13:43.365 1 904 libcrypto.so 00:13:43.365 ----------------------------------------------------- 00:13:43.365 00:13:43.365 ************************************ 00:13:43.365 END TEST xnvme_fio_plugin 00:13:43.365 
************************************ 00:13:43.365 00:13:43.365 real 0m12.272s 00:13:43.365 user 0m4.446s 00:13:43.365 sys 0m7.349s 00:13:43.365 23:47:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.365 23:47:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.365 23:47:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:43.365 23:47:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:43.365 23:47:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:43.365 23:47:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:43.365 23:47:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.365 23:47:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.365 23:47:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.365 ************************************ 00:13:43.365 START TEST xnvme_rpc 00:13:43.365 ************************************ 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:43.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81719 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81719 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81719 ']' 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.365 23:47:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:43.365 [2024-11-26 23:47:31.011984] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:13:43.365 [2024-11-26 23:47:31.012380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81719 ] 00:13:43.365 [2024-11-26 23:47:31.157979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.365 [2024-11-26 23:47:31.198562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.946 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.946 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:43.946 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:43.946 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.946 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.946 xnvme_bdev 00:13:43.946 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.946 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.947 23:47:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81719 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81719 ']' 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81719 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81719 00:13:43.947 killing process with pid 81719 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81719' 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81719 00:13:43.947 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81719 00:13:44.521 ************************************ 00:13:44.521 END TEST xnvme_rpc 00:13:44.521 ************************************ 00:13:44.521 00:13:44.521 real 0m1.641s 00:13:44.521 user 0m1.605s 00:13:44.521 sys 0m0.529s 00:13:44.521 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.521 23:47:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.521 23:47:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:44.521 23:47:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.521 23:47:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.521 23:47:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.521 ************************************ 00:13:44.521 START TEST xnvme_bdevperf 00:13:44.521 ************************************ 00:13:44.521 23:47:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:44.521 23:47:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:44.521 23:47:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:44.521 23:47:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.521 23:47:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:44.521 23:47:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:13:44.521 23:47:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:44.521 23:47:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.783 { 00:13:44.783 "subsystems": [ 00:13:44.783 { 00:13:44.783 "subsystem": "bdev", 00:13:44.783 "config": [ 00:13:44.783 { 00:13:44.783 "params": { 00:13:44.783 "io_mechanism": "io_uring", 00:13:44.783 "conserve_cpu": true, 00:13:44.783 "filename": "/dev/nvme0n1", 00:13:44.783 "name": "xnvme_bdev" 00:13:44.783 }, 00:13:44.783 "method": "bdev_xnvme_create" 00:13:44.783 }, 00:13:44.783 { 00:13:44.783 "method": "bdev_wait_for_examine" 00:13:44.783 } 00:13:44.783 ] 00:13:44.783 } 00:13:44.783 ] 00:13:44.783 } 00:13:44.783 [2024-11-26 23:47:32.705313] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:13:44.783 [2024-11-26 23:47:32.705479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81782 ] 00:13:44.783 [2024-11-26 23:47:32.855428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.783 [2024-11-26 23:47:32.895765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.045 Running I/O for 5 seconds... 00:13:46.934 39166.00 IOPS, 152.99 MiB/s [2024-11-26T23:47:36.450Z] 37917.50 IOPS, 148.12 MiB/s [2024-11-26T23:47:37.397Z] 38112.00 IOPS, 148.88 MiB/s [2024-11-26T23:47:38.341Z] 38179.00 IOPS, 149.14 MiB/s [2024-11-26T23:47:38.341Z] 38421.60 IOPS, 150.08 MiB/s 00:13:50.210 Latency(us) 00:13:50.210 [2024-11-26T23:47:38.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.210 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:50.210 xnvme_bdev : 5.01 38356.30 149.83 0.00 0.00 1662.89 649.06 15526.99 00:13:50.210 [2024-11-26T23:47:38.341Z] =================================================================================================================== 00:13:50.210 [2024-11-26T23:47:38.341Z] Total : 38356.30 149.83 0.00 0.00 1662.89 649.06 15526.99 00:13:50.210 23:47:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.210 23:47:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:50.210 23:47:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:50.210 23:47:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:50.210 23:47:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.472 { 00:13:50.472 "subsystems": [ 00:13:50.472 { 00:13:50.472 "subsystem": "bdev", 00:13:50.472 "config": [ 00:13:50.472 { 00:13:50.472 "params": { 00:13:50.472 "io_mechanism": "io_uring", 00:13:50.472 "conserve_cpu": true, 00:13:50.472 "filename": "/dev/nvme0n1", 00:13:50.472 "name": "xnvme_bdev" 00:13:50.472 }, 00:13:50.472 "method": "bdev_xnvme_create" 00:13:50.472 }, 00:13:50.472 { 00:13:50.472 "method": "bdev_wait_for_examine" 00:13:50.472 } 00:13:50.472 ] 00:13:50.472 } 00:13:50.472 ] 00:13:50.472 } 00:13:50.472 [2024-11-26 23:47:38.387257] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:13:50.472 [2024-11-26 23:47:38.387419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81846 ] 00:13:50.472 [2024-11-26 23:47:38.540353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.472 [2024-11-26 23:47:38.581941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.734 Running I/O for 5 seconds... 00:13:52.620 14807.00 IOPS, 57.84 MiB/s [2024-11-26T23:47:42.139Z] 14568.50 IOPS, 56.91 MiB/s [2024-11-26T23:47:43.083Z] 14802.67 IOPS, 57.82 MiB/s [2024-11-26T23:47:44.024Z] 14809.25 IOPS, 57.85 MiB/s [2024-11-26T23:47:44.024Z] 14907.20 IOPS, 58.23 MiB/s 00:13:55.893 Latency(us) 00:13:55.893 [2024-11-26T23:47:44.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.893 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:55.893 xnvme_bdev : 5.01 14897.70 58.19 0.00 0.00 4288.67 76.80 23492.14 00:13:55.893 [2024-11-26T23:47:44.024Z] =================================================================================================================== 00:13:55.893 [2024-11-26T23:47:44.024Z] Total : 14897.70 58.19 0.00 0.00 4288.67 76.80 23492.14 00:13:55.893 00:13:55.893 real 0m11.368s 00:13:55.893 user 0m7.791s 00:13:55.893 sys 0m2.603s 00:13:55.893 23:47:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.893 ************************************ 00:13:55.893 END TEST xnvme_bdevperf 00:13:55.893 ************************************ 00:13:55.893 23:47:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.155 23:47:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:56.155 23:47:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.155 23:47:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.155 23:47:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.155 ************************************ 00:13:56.155 START TEST xnvme_fio_plugin 00:13:56.155 ************************************ 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:56.155 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:56.156 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:56.156 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:56.156 23:47:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:56.156 { 00:13:56.156 "subsystems": [ 00:13:56.156 { 00:13:56.156 "subsystem": "bdev", 00:13:56.156 "config": [ 00:13:56.156 { 00:13:56.156 "params": { 00:13:56.156 "io_mechanism": "io_uring", 00:13:56.156 "conserve_cpu": true, 00:13:56.156 "filename": "/dev/nvme0n1", 00:13:56.156 "name": "xnvme_bdev" 00:13:56.156 }, 00:13:56.156 "method": "bdev_xnvme_create" 00:13:56.156 }, 00:13:56.156 { 00:13:56.156 "method": "bdev_wait_for_examine" 00:13:56.156 } 00:13:56.156 ] 00:13:56.156 } 00:13:56.156 ] 00:13:56.156 } 00:13:56.156 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:56.156 fio-3.35 00:13:56.156 Starting 1 thread 00:14:02.766 00:14:02.766 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81959: Tue Nov 26 23:47:49 2024 00:14:02.766 read: IOPS=39.8k, BW=156MiB/s (163MB/s)(779MiB/5002msec) 00:14:02.766 slat (usec): min=2, max=119, avg= 3.28, stdev= 1.60 00:14:02.766 clat (usec): min=898, max=9818, avg=1474.41, stdev=286.30 00:14:02.766 lat (usec): min=901, max=9821, avg=1477.69, stdev=286.69 00:14:02.766 clat percentiles (usec): 00:14:02.766 | 1.00th=[ 1074], 5.00th=[ 1139], 10.00th=[ 1172], 20.00th=[ 1237], 00:14:02.766 | 30.00th=[ 1287], 40.00th=[ 1336], 50.00th=[ 1401], 60.00th=[ 1483], 00:14:02.766 | 70.00th=[ 1582], 80.00th=[ 1696], 90.00th=[ 1860], 95.00th=[ 2008], 00:14:02.766 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 3064], 99.95th=[ 3458], 00:14:02.766 | 99.99th=[ 5145] 00:14:02.766 bw ( KiB/s): min=139264, max=171520, 
per=100.00%, avg=160170.67, stdev=10880.75, samples=9 00:14:02.766 iops : min=34816, max=42880, avg=40042.67, stdev=2720.19, samples=9 00:14:02.766 lat (usec) : 1000=0.13% 00:14:02.766 lat (msec) : 2=94.81%, 4=5.04%, 10=0.03% 00:14:02.766 cpu : usr=69.99%, sys=26.27%, ctx=12, majf=0, minf=1063 00:14:02.766 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:02.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:02.766 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:02.766 issued rwts: total=199331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:02.766 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:02.766 00:14:02.766 Run status group 0 (all jobs): 00:14:02.766 READ: bw=156MiB/s (163MB/s), 156MiB/s-156MiB/s (163MB/s-163MB/s), io=779MiB (816MB), run=5002-5002msec 00:14:02.766 ----------------------------------------------------- 00:14:02.766 Suppressions used: 00:14:02.766 count bytes template 00:14:02.766 1 11 /usr/src/fio/parse.c 00:14:02.766 1 8 libtcmalloc_minimal.so 00:14:02.766 1 904 libcrypto.so 00:14:02.766 ----------------------------------------------------- 00:14:02.766 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:02.766 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:02.767 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:02.767 23:47:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.767 { 00:14:02.767 "subsystems": [ 00:14:02.767 { 00:14:02.767 "subsystem": "bdev", 00:14:02.767 "config": [ 00:14:02.767 { 00:14:02.767 "params": { 00:14:02.767 "io_mechanism": "io_uring", 00:14:02.767 "conserve_cpu": true, 00:14:02.767 "filename": "/dev/nvme0n1", 00:14:02.767 "name": "xnvme_bdev" 00:14:02.767 }, 00:14:02.767 "method": "bdev_xnvme_create" 00:14:02.767 }, 00:14:02.767 { 00:14:02.767 "method": "bdev_wait_for_examine" 00:14:02.767 } 00:14:02.767 ] 00:14:02.767 } 00:14:02.767 ] 00:14:02.767 } 00:14:02.767 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:02.767 fio-3.35 00:14:02.767 Starting 1 thread 00:14:08.061 00:14:08.061 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82035: Tue Nov 26 23:47:55 2024 00:14:08.061 write: IOPS=36.2k, BW=142MiB/s (148MB/s)(708MiB/5001msec); 0 zone resets 00:14:08.061 slat (usec): min=2, max=565, avg= 4.09, stdev= 2.99 00:14:08.061 clat (usec): min=967, max=5019, avg=1602.33, stdev=311.34 00:14:08.061 lat (usec): min=970, max=5022, avg=1606.42, stdev=311.83 00:14:08.061 clat percentiles (usec): 00:14:08.061 | 1.00th=[ 1074], 5.00th=[ 1156], 10.00th=[ 1237], 20.00th=[ 1336], 00:14:08.061 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1565], 60.00th=[ 1647], 00:14:08.061 | 70.00th=[ 1745], 80.00th=[ 1844], 90.00th=[ 1991], 95.00th=[ 2147], 00:14:08.061 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 3425], 99.95th=[ 3556], 00:14:08.061 | 99.99th=[ 3785] 00:14:08.061 bw ( KiB/s): min=134344, max=166752, per=100.00%, avg=145539.56, stdev=11447.42, samples=9 00:14:08.061 iops : min=33586, max=41688, avg=36384.89, stdev=2861.85, samples=9 00:14:08.061 lat (usec) : 1000=0.06% 00:14:08.061 lat (msec) : 2=90.07%, 4=9.86%, 10=0.01% 00:14:08.061 cpu : usr=54.86%, sys=40.12%, ctx=38, majf=0, minf=1063 00:14:08.061 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:08.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.061 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:08.061 issued rwts: total=0,181223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.061 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:08.061 00:14:08.061 Run status group 0 (all jobs): 00:14:08.061 WRITE: bw=142MiB/s (148MB/s), 142MiB/s-142MiB/s (148MB/s-148MB/s), io=708MiB (742MB), run=5001-5001msec 00:14:08.322 ----------------------------------------------------- 00:14:08.322 Suppressions used: 00:14:08.322 count bytes template 00:14:08.322 1 11 /usr/src/fio/parse.c 00:14:08.322 1 8 libtcmalloc_minimal.so 00:14:08.322 1 904 libcrypto.so 00:14:08.322 ----------------------------------------------------- 00:14:08.322 00:14:08.322 00:14:08.322 real 0m12.280s 00:14:08.322 user 0m7.537s 00:14:08.322 sys 0m3.977s 00:14:08.322 23:47:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:14:08.322 23:47:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:08.322 ************************************ 00:14:08.322 END TEST xnvme_fio_plugin 00:14:08.322 ************************************ 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:08.322 23:47:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:08.322 23:47:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.322 23:47:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.322 23:47:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.322 ************************************ 00:14:08.322 START TEST xnvme_rpc 00:14:08.322 ************************************ 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82120 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82120 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82120 ']' 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.322 23:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.583 [2024-11-26 23:47:56.511616] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:14:08.583 [2024-11-26 23:47:56.511784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82120 ] 00:14:08.583 [2024-11-26 23:47:56.660759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.583 [2024-11-26 23:47:56.701977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.529 xnvme_bdev 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82120 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82120 ']' 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82120 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82120 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.529 killing process with pid 82120 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82120' 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82120 00:14:09.529 23:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82120 00:14:10.144 00:14:10.144 real 0m1.639s 00:14:10.144 user 0m1.620s 00:14:10.144 sys 0m0.516s 00:14:10.144 23:47:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.144 ************************************ 00:14:10.144 END TEST xnvme_rpc 00:14:10.144 ************************************ 00:14:10.144 23:47:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.144 23:47:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:10.144 23:47:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.144 23:47:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.144 23:47:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.144 ************************************ 00:14:10.144 START TEST xnvme_bdevperf 00:14:10.144 ************************************ 00:14:10.144 23:47:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:10.144 23:47:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:10.144 23:47:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:10.144 23:47:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:10.144 23:47:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:10.144 23:47:58 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:10.144 23:47:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:10.144 23:47:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:10.144 { 00:14:10.144 "subsystems": [ 00:14:10.144 { 00:14:10.144 "subsystem": "bdev", 00:14:10.144 "config": [ 00:14:10.144 { 00:14:10.144 "params": { 00:14:10.144 "io_mechanism": "io_uring_cmd", 00:14:10.144 "conserve_cpu": false, 00:14:10.144 "filename": "/dev/ng0n1", 00:14:10.144 "name": "xnvme_bdev" 00:14:10.144 }, 00:14:10.144 "method": "bdev_xnvme_create" 00:14:10.144 }, 00:14:10.144 { 00:14:10.144 "method": "bdev_wait_for_examine" 00:14:10.144 } 00:14:10.144 ] 00:14:10.144 } 00:14:10.144 ] 00:14:10.144 } 00:14:10.144 [2024-11-26 23:47:58.198862] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:14:10.144 [2024-11-26 23:47:58.199014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82179 ] 00:14:10.406 [2024-11-26 23:47:58.347656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.406 [2024-11-26 23:47:58.388765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.406 Running I/O for 5 seconds... 00:14:12.761 42434.00 IOPS, 165.76 MiB/s [2024-11-26T23:48:01.836Z] 39932.50 IOPS, 155.99 MiB/s [2024-11-26T23:48:02.780Z] 38784.00 IOPS, 151.50 MiB/s [2024-11-26T23:48:03.726Z] 37686.25 IOPS, 147.21 MiB/s [2024-11-26T23:48:03.726Z] 36929.20 IOPS, 144.25 MiB/s 00:14:15.595 Latency(us) 00:14:15.595 [2024-11-26T23:48:03.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.595 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:15.595 xnvme_bdev : 5.00 36916.00 144.20 0.00 0.00 1729.64 359.19 10939.47 00:14:15.595 [2024-11-26T23:48:03.726Z] =================================================================================================================== 00:14:15.595 [2024-11-26T23:48:03.726Z] Total : 36916.00 144.20 0.00 0.00 1729.64 359.19 10939.47 00:14:15.856 23:48:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:15.856 23:48:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:15.856 23:48:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:15.856 23:48:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:15.856 23:48:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:15.856 { 00:14:15.856 "subsystems": [ 00:14:15.856 { 00:14:15.856 "subsystem": "bdev", 00:14:15.856 "config": [ 00:14:15.856 { 00:14:15.856 "params": { 00:14:15.856 "io_mechanism": "io_uring_cmd", 00:14:15.856 "conserve_cpu": false, 00:14:15.856 "filename": "/dev/ng0n1", 00:14:15.856 "name": "xnvme_bdev" 00:14:15.856 }, 00:14:15.856 "method": "bdev_xnvme_create" 00:14:15.856 }, 00:14:15.856 { 00:14:15.856 "method": "bdev_wait_for_examine" 00:14:15.856 } 00:14:15.856 ] 00:14:15.856 } 00:14:15.856 ] 00:14:15.856 } 00:14:15.856 [2024-11-26 23:48:03.875345] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:14:15.856 [2024-11-26 23:48:03.875669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82242 ] 00:14:16.118 [2024-11-26 23:48:04.024101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.118 [2024-11-26 23:48:04.064808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.118 Running I/O for 5 seconds... 00:14:18.448 20264.00 IOPS, 79.16 MiB/s [2024-11-26T23:48:07.524Z] 20874.50 IOPS, 81.54 MiB/s [2024-11-26T23:48:08.470Z] 21218.00 IOPS, 82.88 MiB/s [2024-11-26T23:48:09.414Z] 21192.25 IOPS, 82.78 MiB/s [2024-11-26T23:48:09.414Z] 21738.40 IOPS, 84.92 MiB/s 00:14:21.283 Latency(us) 00:14:21.283 [2024-11-26T23:48:09.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.283 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:21.283 xnvme_bdev : 5.01 21731.39 84.89 0.00 0.00 2939.07 72.86 21475.64 00:14:21.283 [2024-11-26T23:48:09.414Z] =================================================================================================================== 00:14:21.283 [2024-11-26T23:48:09.414Z] Total : 21731.39 84.89 0.00 0.00 2939.07 72.86 21475.64 00:14:21.543 23:48:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:21.543 23:48:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:21.543 23:48:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:21.543 23:48:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:21.543 23:48:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:21.544 { 00:14:21.544 "subsystems": [ 00:14:21.544 { 00:14:21.544 "subsystem": "bdev", 00:14:21.544 "config": [ 00:14:21.544 { 00:14:21.544 "params": { 00:14:21.544 "io_mechanism": "io_uring_cmd", 00:14:21.544 "conserve_cpu": false, 00:14:21.544 "filename": "/dev/ng0n1", 00:14:21.544 "name": "xnvme_bdev" 00:14:21.544 }, 00:14:21.544 "method": "bdev_xnvme_create" 00:14:21.544 }, 00:14:21.544 { 00:14:21.544 "method": "bdev_wait_for_examine" 00:14:21.544 } 00:14:21.544 ] 00:14:21.544 } 00:14:21.544 ] 00:14:21.544 } 00:14:21.544 [2024-11-26 23:48:09.549613] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:14:21.544 [2024-11-26 23:48:09.549766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82311 ] 00:14:21.805 [2024-11-26 23:48:09.696104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.805 [2024-11-26 23:48:09.737145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.805 Running I/O for 5 seconds... 
00:14:23.767 78336.00 IOPS, 306.00 MiB/s [2024-11-26T23:48:13.285Z] 78400.00 IOPS, 306.25 MiB/s [2024-11-26T23:48:14.223Z] 78613.33 IOPS, 307.08 MiB/s [2024-11-26T23:48:15.167Z] 80704.00 IOPS, 315.25 MiB/s [2024-11-26T23:48:15.167Z] 81408.00 IOPS, 318.00 MiB/s 00:14:27.036 Latency(us) 00:14:27.036 [2024-11-26T23:48:15.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.036 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:27.036 xnvme_bdev : 5.00 81382.00 317.90 0.00 0.00 783.05 526.18 2470.20 00:14:27.036 [2024-11-26T23:48:15.167Z] =================================================================================================================== 00:14:27.036 [2024-11-26T23:48:15.167Z] Total : 81382.00 317.90 0.00 0.00 783.05 526.18 2470.20 00:14:27.036 23:48:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.036 23:48:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:27.036 23:48:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.036 23:48:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.036 23:48:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.297 { 00:14:27.297 "subsystems": [ 00:14:27.297 { 00:14:27.297 "subsystem": "bdev", 00:14:27.297 "config": [ 00:14:27.297 { 00:14:27.297 "params": { 00:14:27.297 "io_mechanism": "io_uring_cmd", 00:14:27.297 "conserve_cpu": false, 00:14:27.297 "filename": "/dev/ng0n1", 00:14:27.297 "name": "xnvme_bdev" 00:14:27.297 }, 00:14:27.297 "method": "bdev_xnvme_create" 00:14:27.297 }, 00:14:27.297 { 00:14:27.297 "method": "bdev_wait_for_examine" 00:14:27.297 } 00:14:27.297 ] 00:14:27.297 } 00:14:27.297 ] 00:14:27.297 } 00:14:27.297 [2024-11-26 23:48:15.207084] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:14:27.297 [2024-11-26 23:48:15.207240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82378 ] 00:14:27.297 [2024-11-26 23:48:15.354874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.297 [2024-11-26 23:48:15.382276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.555 Running I/O for 5 seconds... 
00:14:29.431 42694.00 IOPS, 166.77 MiB/s [2024-11-26T23:48:18.502Z] 42169.50 IOPS, 164.72 MiB/s [2024-11-26T23:48:19.879Z] 41230.67 IOPS, 161.06 MiB/s [2024-11-26T23:48:20.820Z] 40441.75 IOPS, 157.98 MiB/s [2024-11-26T23:48:20.820Z] 39853.20 IOPS, 155.68 MiB/s 00:14:32.689 Latency(us) 00:14:32.689 [2024-11-26T23:48:20.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.689 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:32.689 xnvme_bdev : 5.00 39844.94 155.64 0.00 0.00 1602.38 244.18 22383.06 00:14:32.689 [2024-11-26T23:48:20.820Z] =================================================================================================================== 00:14:32.689 [2024-11-26T23:48:20.820Z] Total : 39844.94 155.64 0.00 0.00 1602.38 244.18 22383.06 00:14:32.689 ************************************ 00:14:32.689 END TEST xnvme_bdevperf 00:14:32.689 ************************************ 00:14:32.689 00:14:32.689 real 0m22.565s 00:14:32.689 user 0m10.624s 00:14:32.689 sys 0m11.452s 00:14:32.689 23:48:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.689 23:48:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.689 23:48:20 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:32.689 23:48:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.689 23:48:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.689 23:48:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.689 ************************************ 00:14:32.689 START TEST xnvme_fio_plugin 00:14:32.689 ************************************ 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:32.689 23:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:32.690 { 00:14:32.690 "subsystems": [ 00:14:32.690 { 00:14:32.690 "subsystem": "bdev", 00:14:32.690 "config": [ 00:14:32.690 { 00:14:32.690 "params": { 00:14:32.690 "io_mechanism": "io_uring_cmd", 00:14:32.690 "conserve_cpu": false, 00:14:32.690 "filename": "/dev/ng0n1", 00:14:32.690 "name": "xnvme_bdev" 00:14:32.690 }, 00:14:32.690 "method": "bdev_xnvme_create" 00:14:32.690 }, 00:14:32.690 { 00:14:32.690 "method": "bdev_wait_for_examine" 00:14:32.690 } 00:14:32.690 ] 00:14:32.690 } 00:14:32.690 ] 00:14:32.690 } 00:14:32.951 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:32.951 fio-3.35 00:14:32.951 Starting 1 thread 00:14:39.556 00:14:39.556 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82481: Tue Nov 26 23:48:26 2024 00:14:39.556 read: IOPS=34.5k, BW=135MiB/s (141MB/s)(673MiB/5002msec) 00:14:39.556 slat (nsec): min=2838, max=91752, avg=3754.38, stdev=2074.40 00:14:39.556 clat (usec): min=893, max=3545, avg=1704.00, stdev=296.09 00:14:39.556 lat (usec): min=896, max=3578, avg=1707.75, stdev=296.45 00:14:39.556 clat percentiles (usec): 00:14:39.556 | 1.00th=[ 1205], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1450], 00:14:39.556 | 30.00th=[ 1516], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1745], 00:14:39.556 | 70.00th=[ 1827], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2245], 00:14:39.556 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3130], 99.95th=[ 3228], 00:14:39.556 | 99.99th=[ 3425] 00:14:39.556 bw ( KiB/s): min=132608, max=141029, per=100.00%, avg=138037.89, stdev=2752.86, samples=9 00:14:39.556 iops : min=33152, max=35257, avg=34509.44, stdev=688.18, samples=9 00:14:39.556 lat (usec) : 1000=0.02% 00:14:39.556 lat (msec) : 2=84.80%, 4=15.18% 00:14:39.556 cpu : usr=34.35%, sys=64.31%, ctx=13, majf=0, minf=1063 00:14:39.556 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:39.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.556 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:14:39.556 issued rwts: total=172352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:39.556 00:14:39.556 Run status group 0 (all jobs): 00:14:39.556 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=673MiB (706MB), run=5002-5002msec 00:14:39.556 ----------------------------------------------------- 00:14:39.556 Suppressions used: 00:14:39.556 count bytes template 00:14:39.556 1 11 /usr/src/fio/parse.c 00:14:39.556 1 8 libtcmalloc_minimal.so 00:14:39.556 1 904 libcrypto.so 00:14:39.556 ----------------------------------------------------- 00:14:39.556 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:39.556 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:39.557 23:48:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.557 { 00:14:39.557 "subsystems": [ 00:14:39.557 { 00:14:39.557 "subsystem": "bdev", 00:14:39.557 "config": [ 00:14:39.557 { 00:14:39.557 "params": { 00:14:39.557 "io_mechanism": "io_uring_cmd", 00:14:39.557 "conserve_cpu": false, 00:14:39.557 "filename": "/dev/ng0n1", 00:14:39.557 "name": "xnvme_bdev" 00:14:39.557 }, 00:14:39.557 "method": "bdev_xnvme_create" 00:14:39.557 }, 00:14:39.557 { 00:14:39.557 "method": "bdev_wait_for_examine" 00:14:39.557 } 00:14:39.557 ] 00:14:39.557 } 00:14:39.557 ] 00:14:39.557 } 00:14:39.557 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:39.557 fio-3.35 00:14:39.557 Starting 1 thread 00:14:44.915 00:14:44.915 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82570: Tue Nov 26 23:48:32 2024 00:14:44.915 write: IOPS=35.6k, BW=139MiB/s (146MB/s)(695MiB/5001msec); 0 zone resets 00:14:44.915 slat (usec): min=2, max=290, avg= 4.13, stdev= 2.44 00:14:44.915 clat (usec): min=144, max=5495, avg=1632.50, stdev=298.27 00:14:44.915 lat (usec): min=148, max=5498, avg=1636.63, stdev=298.75 00:14:44.915 clat percentiles (usec): 00:14:44.915 | 1.00th=[ 1029], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1401], 00:14:44.915 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1680], 00:14:44.915 | 70.00th=[ 1762], 80.00th=[ 1844], 90.00th=[ 1991], 95.00th=[ 2114], 00:14:44.915 | 99.00th=[ 2474], 99.50th=[ 2671], 99.90th=[ 3359], 99.95th=[ 3621], 00:14:44.915 | 99.99th=[ 4424] 00:14:44.915 bw ( KiB/s): min=139824, max=145064, per=99.62%, avg=141776.89, stdev=1642.92, samples=9 00:14:44.915 iops : min=34956, max=36266, avg=35444.22, stdev=410.73, samples=9 00:14:44.915 lat (usec) : 250=0.01%, 500=0.04%, 750=0.34%, 1000=0.47% 00:14:44.915 lat (msec) : 2=90.01%, 4=9.12%, 10=0.03% 00:14:44.915 cpu : usr=35.52%, sys=63.04%, ctx=11, majf=0, minf=1063 00:14:44.915 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.1%, 16=24.4%, 32=51.4%, >=64=1.7% 00:14:44.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.915 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:44.915 issued rwts: total=0,177930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.915 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:44.915 00:14:44.915 Run status group 0 (all jobs): 00:14:44.915 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=695MiB (729MB), run=5001-5001msec 00:14:44.915 ----------------------------------------------------- 00:14:44.915 Suppressions used: 00:14:44.915 count bytes template 00:14:44.915 1 11 /usr/src/fio/parse.c 00:14:44.915 1 8 libtcmalloc_minimal.so 00:14:44.915 1 904 libcrypto.so 00:14:44.915 ----------------------------------------------------- 00:14:44.915 00:14:44.915 00:14:44.915 real 0m12.239s 00:14:44.915 user 0m4.812s 00:14:44.915 sys 0m6.966s 00:14:44.915 ************************************ 00:14:44.915 END TEST xnvme_fio_plugin 00:14:44.915 ************************************ 00:14:44.915 23:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.915 23:48:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:45.176 23:48:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:45.176 23:48:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:45.176 23:48:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:14:45.176 23:48:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:45.176 23:48:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:45.176 23:48:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.176 23:48:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.176 ************************************ 00:14:45.176 START TEST xnvme_rpc 00:14:45.176 ************************************ 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82651 00:14:45.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82651 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82651 ']' 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.176 23:48:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.177 23:48:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.177 23:48:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:45.177 [2024-11-26 23:48:33.162739] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:14:45.177 [2024-11-26 23:48:33.163185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82651 ] 00:14:45.437 [2024-11-26 23:48:33.311304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.437 [2024-11-26 23:48:33.352810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.008 xnvme_bdev 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.008 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82651 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82651 ']' 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82651 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82651 00:14:46.269 killing process with pid 82651 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82651' 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82651 00:14:46.269 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82651 00:14:46.842 00:14:46.842 real 0m1.628s 00:14:46.842 user 0m1.591s 00:14:46.842 sys 0m0.518s 00:14:46.842 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.842 23:48:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.842 ************************************ 00:14:46.842 END TEST xnvme_rpc 00:14:46.842 ************************************ 00:14:46.842 23:48:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:46.842 23:48:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:46.842 23:48:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.842 23:48:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.842 ************************************ 00:14:46.842 START TEST xnvme_bdevperf 00:14:46.842 ************************************ 00:14:46.842 23:48:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:46.842 23:48:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:46.842 23:48:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:46.842 23:48:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:46.842 23:48:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:46.842 23:48:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:46.842 23:48:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:46.842 23:48:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:46.842 { 00:14:46.842 "subsystems": [ 00:14:46.842 { 00:14:46.842 "subsystem": "bdev", 00:14:46.842 "config": [ 00:14:46.842 { 00:14:46.842 "params": { 00:14:46.842 "io_mechanism": "io_uring_cmd", 00:14:46.842 "conserve_cpu": true, 00:14:46.842 "filename": "/dev/ng0n1", 00:14:46.842 "name": "xnvme_bdev" 00:14:46.842 }, 00:14:46.842 "method": "bdev_xnvme_create" 00:14:46.842 }, 00:14:46.842 { 00:14:46.842 "method": "bdev_wait_for_examine" 00:14:46.842 } 00:14:46.842 ] 00:14:46.842 } 00:14:46.842 ] 00:14:46.842 } 00:14:46.842 [2024-11-26 23:48:34.840939] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:14:46.842 [2024-11-26 23:48:34.841079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82703 ] 00:14:47.104 [2024-11-26 23:48:34.991268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.104 [2024-11-26 23:48:35.032006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.104 Running I/O for 5 seconds... 00:14:49.059 33344.00 IOPS, 130.25 MiB/s [2024-11-26T23:48:38.577Z] 34528.00 IOPS, 134.88 MiB/s [2024-11-26T23:48:39.520Z] 35264.00 IOPS, 137.75 MiB/s [2024-11-26T23:48:40.464Z] 36775.75 IOPS, 143.66 MiB/s 00:14:52.333 Latency(us) 00:14:52.333 [2024-11-26T23:48:40.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.333 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:52.333 xnvme_bdev : 5.00 38141.83 148.99 0.00 0.00 1673.98 869.61 7057.72 00:14:52.333 [2024-11-26T23:48:40.464Z] =================================================================================================================== 00:14:52.333 [2024-11-26T23:48:40.464Z] Total : 38141.83 148.99 0.00 0.00 1673.98 869.61 7057.72 00:14:52.333 23:48:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:52.333 23:48:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:52.333 23:48:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:52.333 23:48:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:52.333 23:48:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:52.595 { 00:14:52.595 "subsystems": [ 00:14:52.595 { 00:14:52.595 "subsystem": "bdev", 00:14:52.595 "config": [ 00:14:52.595 { 00:14:52.595 "params": { 00:14:52.595 "io_mechanism": "io_uring_cmd", 00:14:52.595 "conserve_cpu": true, 00:14:52.595 "filename": "/dev/ng0n1", 00:14:52.595 "name": "xnvme_bdev" 00:14:52.595 }, 00:14:52.595 "method": "bdev_xnvme_create" 00:14:52.595 }, 00:14:52.595 { 00:14:52.595 "method": "bdev_wait_for_examine" 00:14:52.595 } 00:14:52.595 ] 00:14:52.595 } 00:14:52.595 ] 00:14:52.595 } 00:14:52.595 [2024-11-26 23:48:40.524187] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:14:52.596 [2024-11-26 23:48:40.524578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82772 ] 00:14:52.596 [2024-11-26 23:48:40.674048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.596 [2024-11-26 23:48:40.715291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.857 Running I/O for 5 seconds... 00:14:54.740 41623.00 IOPS, 162.59 MiB/s [2024-11-26T23:48:44.256Z] 39825.50 IOPS, 155.57 MiB/s [2024-11-26T23:48:45.199Z] 38883.67 IOPS, 151.89 MiB/s [2024-11-26T23:48:46.155Z] 39481.75 IOPS, 154.23 MiB/s [2024-11-26T23:48:46.155Z] 39360.60 IOPS, 153.75 MiB/s 00:14:58.024 Latency(us) 00:14:58.024 [2024-11-26T23:48:46.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.024 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:58.024 xnvme_bdev : 5.00 39354.17 153.73 0.00 0.00 1621.65 589.19 8469.27 00:14:58.024 [2024-11-26T23:48:46.155Z] =================================================================================================================== 00:14:58.024 [2024-11-26T23:48:46.155Z] Total : 39354.17 153.73 0.00 0.00 1621.65 589.19 8469.27 00:14:58.024 23:48:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:58.024 23:48:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:58.024 23:48:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:58.024 23:48:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:58.024 23:48:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:58.286 { 00:14:58.286 "subsystems": [ 00:14:58.286 { 00:14:58.286 "subsystem": "bdev", 00:14:58.286 "config": [ 00:14:58.286 { 00:14:58.286 "params": { 00:14:58.286 "io_mechanism": "io_uring_cmd", 00:14:58.286 "conserve_cpu": true, 00:14:58.286 "filename": "/dev/ng0n1", 00:14:58.286 "name": "xnvme_bdev" 00:14:58.286 }, 00:14:58.286 "method": "bdev_xnvme_create" 00:14:58.286 }, 00:14:58.286 { 00:14:58.286 "method": "bdev_wait_for_examine" 00:14:58.286 } 00:14:58.286 ] 00:14:58.286 } 00:14:58.286 ] 00:14:58.286 } 00:14:58.286 [2024-11-26 23:48:46.188109] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:14:58.286 [2024-11-26 23:48:46.188470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82835 ] 00:14:58.286 [2024-11-26 23:48:46.336591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.286 [2024-11-26 23:48:46.377026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.548 Running I/O for 5 seconds... 
00:15:00.448 72256.00 IOPS, 282.25 MiB/s [2024-11-26T23:48:49.964Z] 74144.00 IOPS, 289.62 MiB/s [2024-11-26T23:48:50.535Z] 75840.00 IOPS, 296.25 MiB/s [2024-11-26T23:48:51.909Z] 76080.00 IOPS, 297.19 MiB/s [2024-11-26T23:48:51.909Z] 80102.40 IOPS, 312.90 MiB/s 00:15:03.778 Latency(us) 00:15:03.778 [2024-11-26T23:48:51.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.778 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:03.778 xnvme_bdev : 5.00 80061.37 312.74 0.00 0.00 795.80 332.41 3352.42 00:15:03.778 [2024-11-26T23:48:51.909Z] =================================================================================================================== 00:15:03.778 [2024-11-26T23:48:51.909Z] Total : 80061.37 312.74 0.00 0.00 795.80 332.41 3352.42 00:15:03.778 23:48:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:03.778 23:48:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:03.778 23:48:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:03.778 23:48:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:03.778 23:48:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:03.778 { 00:15:03.778 "subsystems": [ 00:15:03.778 { 00:15:03.778 "subsystem": "bdev", 00:15:03.778 "config": [ 00:15:03.778 { 00:15:03.778 "params": { 00:15:03.778 "io_mechanism": "io_uring_cmd", 00:15:03.778 "conserve_cpu": true, 00:15:03.778 "filename": "/dev/ng0n1", 00:15:03.778 "name": "xnvme_bdev" 00:15:03.778 }, 00:15:03.778 "method": "bdev_xnvme_create" 00:15:03.778 }, 00:15:03.778 { 00:15:03.778 "method": "bdev_wait_for_examine" 00:15:03.778 } 00:15:03.778 ] 00:15:03.778 } 00:15:03.778 ] 00:15:03.778 } 00:15:03.778 [2024-11-26 23:48:51.752079] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:15:03.778 [2024-11-26 23:48:51.752327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82908 ] 00:15:03.778 [2024-11-26 23:48:51.895205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.038 [2024-11-26 23:48:51.927573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.038 Running I/O for 5 seconds... 
00:15:05.912 50209.00 IOPS, 196.13 MiB/s [2024-11-26T23:48:55.427Z] 46099.50 IOPS, 180.08 MiB/s [2024-11-26T23:48:56.370Z] 43606.67 IOPS, 170.34 MiB/s [2024-11-26T23:48:57.330Z] 42535.00 IOPS, 166.15 MiB/s [2024-11-26T23:48:57.330Z] 40850.40 IOPS, 159.57 MiB/s 00:15:09.199 Latency(us) 00:15:09.199 [2024-11-26T23:48:57.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.199 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:09.199 xnvme_bdev : 5.01 40791.78 159.34 0.00 0.00 1563.13 84.68 22080.59 00:15:09.199 [2024-11-26T23:48:57.330Z] =================================================================================================================== 00:15:09.199 [2024-11-26T23:48:57.330Z] Total : 40791.78 159.34 0.00 0.00 1563.13 84.68 22080.59 00:15:09.199 00:15:09.199 real 0m22.444s 00:15:09.199 user 0m14.011s 00:15:09.199 sys 0m6.274s 00:15:09.199 23:48:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.199 ************************************ 00:15:09.199 END TEST xnvme_bdevperf 00:15:09.199 ************************************ 00:15:09.199 23:48:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:09.199 23:48:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:09.199 23:48:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:09.199 23:48:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.199 23:48:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.199 ************************************ 00:15:09.199 START TEST xnvme_fio_plugin 00:15:09.199 ************************************ 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:09.199 23:48:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.199 { 00:15:09.199 "subsystems": [ 00:15:09.199 { 00:15:09.199 "subsystem": "bdev", 00:15:09.199 "config": [ 00:15:09.199 { 00:15:09.199 "params": { 00:15:09.199 "io_mechanism": "io_uring_cmd", 00:15:09.199 "conserve_cpu": true, 00:15:09.199 "filename": "/dev/ng0n1", 00:15:09.199 "name": "xnvme_bdev" 00:15:09.199 }, 00:15:09.199 "method": "bdev_xnvme_create" 00:15:09.199 }, 00:15:09.199 { 00:15:09.199 "method": "bdev_wait_for_examine" 00:15:09.199 } 00:15:09.199 ] 00:15:09.199 } 00:15:09.199 ] 00:15:09.199 } 00:15:09.459 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:09.459 fio-3.35 00:15:09.459 Starting 1 thread 00:15:14.805 00:15:14.805 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83011: Tue Nov 26 23:49:02 2024 00:15:14.805 read: IOPS=37.5k, BW=147MiB/s (154MB/s)(733MiB/5001msec) 00:15:14.805 slat (nsec): min=2830, max=68438, avg=3582.06, stdev=1949.27 00:15:14.805 clat (usec): min=722, max=3209, avg=1561.78, stdev=302.32 00:15:14.805 lat (usec): min=726, max=3234, avg=1565.36, stdev=302.83 00:15:14.805 clat percentiles (usec): 00:15:14.805 | 1.00th=[ 1037], 5.00th=[ 1123], 10.00th=[ 1188], 20.00th=[ 1287], 00:15:14.805 | 30.00th=[ 1385], 40.00th=[ 1467], 50.00th=[ 1532], 60.00th=[ 1614], 00:15:14.805 | 70.00th=[ 1696], 80.00th=[ 1811], 90.00th=[ 1958], 95.00th=[ 2114], 00:15:14.805 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 2835], 99.95th=[ 2933], 00:15:14.805 | 99.99th=[ 3097] 00:15:14.805 bw ( KiB/s): min=140288, max=187904, per=100.00%, avg=151210.67, stdev=15090.98, samples=9 00:15:14.805 iops : min=35072, max=46976, avg=37802.67, stdev=3772.74, samples=9 00:15:14.805 lat (usec) : 750=0.01%, 1000=0.33% 00:15:14.805 lat (msec) : 2=91.39%, 4=8.28% 00:15:14.805 cpu : usr=60.74%, sys=36.12%, ctx=8, majf=0, minf=1063 00:15:14.805 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:14.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.805 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:15:14.805 issued rwts: total=187560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:14.805 00:15:14.805 Run status group 0 (all jobs): 00:15:14.805 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=733MiB (768MB), run=5001-5001msec 00:15:15.393 ----------------------------------------------------- 00:15:15.393 Suppressions used: 00:15:15.393 count bytes template 00:15:15.393 1 11 /usr/src/fio/parse.c 00:15:15.393 1 8 libtcmalloc_minimal.so 00:15:15.393 1 904 libcrypto.so 00:15:15.393 ----------------------------------------------------- 00:15:15.393 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:15.393 23:49:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.393 { 00:15:15.393 "subsystems": [ 00:15:15.393 { 00:15:15.393 "subsystem": "bdev", 00:15:15.393 "config": [ 00:15:15.393 { 00:15:15.393 "params": { 00:15:15.393 "io_mechanism": "io_uring_cmd", 00:15:15.393 "conserve_cpu": true, 00:15:15.393 "filename": "/dev/ng0n1", 00:15:15.393 "name": "xnvme_bdev" 00:15:15.393 }, 00:15:15.393 "method": "bdev_xnvme_create" 00:15:15.393 }, 00:15:15.393 { 00:15:15.393 "method": "bdev_wait_for_examine" 00:15:15.393 } 00:15:15.393 ] 00:15:15.393 } 00:15:15.393 ] 00:15:15.393 } 00:15:15.654 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:15.654 fio-3.35 00:15:15.654 Starting 1 thread 00:15:20.951 00:15:20.951 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83096: Tue Nov 26 23:49:09 2024 00:15:20.951 write: IOPS=36.2k, BW=141MiB/s (148MB/s)(708MiB/5002msec); 0 zone resets 00:15:20.951 slat (usec): min=2, max=132, avg= 4.16, stdev= 2.31 00:15:20.951 clat (usec): min=719, max=5769, avg=1599.47, stdev=269.61 00:15:20.951 lat (usec): min=724, max=5773, avg=1603.63, stdev=270.05 00:15:20.951 clat percentiles (usec): 00:15:20.951 | 1.00th=[ 1106], 5.00th=[ 1221], 10.00th=[ 1287], 20.00th=[ 1385], 00:15:20.951 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1582], 60.00th=[ 1647], 00:15:20.951 | 70.00th=[ 1713], 80.00th=[ 1795], 90.00th=[ 1926], 95.00th=[ 2057], 00:15:20.951 | 99.00th=[ 2343], 99.50th=[ 2540], 99.90th=[ 3163], 99.95th=[ 3589], 00:15:20.951 | 99.99th=[ 4359] 00:15:20.951 bw ( KiB/s): min=134560, max=149792, per=100.00%, avg=144913.78, stdev=5418.53, samples=9 00:15:20.951 iops : min=33640, max=37448, avg=36228.44, stdev=1354.63, samples=9 00:15:20.951 lat (usec) : 750=0.01%, 1000=0.19% 00:15:20.951 lat (msec) : 2=92.92%, 4=6.87%, 10=0.02% 00:15:20.951 cpu : usr=52.47%, sys=43.33%, ctx=11, majf=0, minf=1063 00:15:20.951 IO depths : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:20.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.951 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:20.951 issued rwts: total=0,181158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:20.951 00:15:20.951 Run status group 0 (all jobs): 00:15:20.951 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=708MiB (742MB), run=5002-5002msec 00:15:21.522 ----------------------------------------------------- 00:15:21.522 Suppressions used: 00:15:21.522 count bytes template 00:15:21.522 1 11 /usr/src/fio/parse.c 00:15:21.522 1 8 libtcmalloc_minimal.so 00:15:21.522 1 904 libcrypto.so 00:15:21.522 ----------------------------------------------------- 00:15:21.522 00:15:21.522 00:15:21.522 real 0m12.219s 00:15:21.522 user 0m6.935s 00:15:21.522 sys 0m4.598s 00:15:21.522 23:49:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.522 ************************************ 00:15:21.522 END TEST xnvme_fio_plugin 00:15:21.522 ************************************ 00:15:21.522 23:49:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:21.522 23:49:09 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82651 00:15:21.522 23:49:09 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82651 ']' 00:15:21.522 23:49:09 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 82651 00:15:21.522 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82651) - No such process 00:15:21.522 Process with pid 82651 is not found 00:15:21.522 23:49:09 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82651 is not found' 00:15:21.522 00:15:21.522 real 3m2.126s 00:15:21.522 user 1m32.993s 00:15:21.522 sys 1m14.085s 00:15:21.522 23:49:09 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.522 ************************************ 00:15:21.522 END TEST nvme_xnvme 00:15:21.522 ************************************ 00:15:21.522 23:49:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.522 23:49:09 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:21.522 23:49:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.522 23:49:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.522 23:49:09 -- common/autotest_common.sh@10 -- # set +x 00:15:21.522 ************************************ 00:15:21.522 START TEST blockdev_xnvme 00:15:21.522 ************************************ 00:15:21.522 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:21.783 * Looking for test storage... 00:15:21.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.783 23:49:09 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:21.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.783 --rc genhtml_branch_coverage=1 00:15:21.783 --rc genhtml_function_coverage=1 00:15:21.783 --rc genhtml_legend=1 00:15:21.783 --rc geninfo_all_blocks=1 00:15:21.783 --rc geninfo_unexecuted_blocks=1 00:15:21.783 00:15:21.783 ' 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:21.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.783 --rc genhtml_branch_coverage=1 00:15:21.783 --rc genhtml_function_coverage=1 00:15:21.783 --rc genhtml_legend=1 00:15:21.783 --rc geninfo_all_blocks=1 00:15:21.783 --rc geninfo_unexecuted_blocks=1 00:15:21.783 00:15:21.783 ' 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:21.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.783 --rc genhtml_branch_coverage=1 00:15:21.783 --rc genhtml_function_coverage=1 00:15:21.783 --rc genhtml_legend=1 00:15:21.783 --rc geninfo_all_blocks=1 00:15:21.783 --rc geninfo_unexecuted_blocks=1 00:15:21.783 00:15:21.783 ' 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:21.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.783 --rc genhtml_branch_coverage=1 00:15:21.783 --rc genhtml_function_coverage=1 00:15:21.783 --rc genhtml_legend=1 00:15:21.783 --rc geninfo_all_blocks=1 00:15:21.783 --rc geninfo_unexecuted_blocks=1 00:15:21.783 00:15:21.783 ' 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83219 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83219 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83219 ']' 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.783 23:49:09 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.783 23:49:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.783 [2024-11-26 23:49:09.875712] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
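At this point blockdev.sh has fixed its test parameters (test_type=xnvme, no crypto device, no extra env context) and launches a bare spdk_tgt, then blocks in waitforlisten until the target answers on the default UNIX socket; every later rpc_cmd in this test is piped into that socket. A rough, hand-run sketch of that startup step, reusing the binary and rpc.py paths from the trace (the spdk_get_version probe is my assumption; the harness's own readiness check may differ):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# poll the RPC socket until the target responds; the harness allows up to 100 retries
for i in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    sleep 0.5
done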
00:15:21.783 [2024-11-26 23:49:09.875902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83219 ] 00:15:22.044 [2024-11-26 23:49:10.022478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.044 [2024-11-26 23:49:10.063678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.617 23:49:10 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.617 23:49:10 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:22.617 23:49:10 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:22.617 23:49:10 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:22.617 23:49:10 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:22.617 23:49:10 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:22.617 23:49:10 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:23.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:23.764 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:23.764 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:23.764 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:23.764 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:23.764 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:23.764 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.765 23:49:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.765 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:23.765 nvme0n1 00:15:23.765 nvme0n2 00:15:23.765 nvme0n3 00:15:23.765 nvme1n1 00:15:23.765 nvme2n1 00:15:24.027 nvme3n1 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.027 23:49:11 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.027 23:49:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.027 23:49:11 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:24.027 23:49:12 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:24.028 23:49:12 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1947d359-6311-44af-af51-a2aafe97ee81"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1947d359-6311-44af-af51-a2aafe97ee81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ffd906bc-e295-425d-89cf-4d2aa240ffe5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffd906bc-e295-425d-89cf-4d2aa240ffe5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "3d686344-4531-4ae5-947a-08be2e7c45d7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d686344-4531-4ae5-947a-08be2e7c45d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dd0e50a7-332b-4985-89fa-0f15e7cca0f4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dd0e50a7-332b-4985-89fa-0f15e7cca0f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "3f0bfdb2-9e79-469e-badf-ec514c0e30f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3f0bfdb2-9e79-469e-badf-ec514c0e30f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "053b1e88-c0dc-4892-b0f5-676d1a0538eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "053b1e88-c0dc-4892-b0f5-676d1a0538eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:24.028 23:49:12 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:24.028 23:49:12 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:24.028 23:49:12 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:24.028 23:49:12 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83219 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83219 ']' 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83219 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83219 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.028 killing process with pid 83219 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83219' 00:15:24.028 23:49:12 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83219 00:15:24.028 
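The setup_xnvme_conf step traced above walks /dev/nvme*n*, skips any namespace whose /sys/block/<name>/queue/zoned reports something other than "none", and queues one bdev_xnvme_create call per remaining namespace with the io_uring mechanism; the six resulting xNVMe bdevs are then confirmed with bdev_get_bdevs, whose JSON is what gets printed above. A minimal sketch of the same RPC sequence issued by hand against the running spdk_tgt (device list, flags, and the jq filter copied from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for dev in /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
    # bdev name = device node basename, e.g. /dev/nvme0n1 -> nvme0n1; -c as in the trace
    "$rpc" bdev_xnvme_create "$dev" "${dev##*/}" io_uring -c
done

# list unclaimed bdevs, as the harness does before picking nvme0n1 for hello_world
"$rpc" bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'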
23:49:12 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83219 00:15:24.601 23:49:12 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:24.601 23:49:12 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:24.601 23:49:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:24.601 23:49:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.601 23:49:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.601 ************************************ 00:15:24.601 START TEST bdev_hello_world 00:15:24.601 ************************************ 00:15:24.601 23:49:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:24.601 [2024-11-26 23:49:12.667437] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:15:24.601 [2024-11-26 23:49:12.667598] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83487 ] 00:15:24.862 [2024-11-26 23:49:12.815810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.862 [2024-11-26 23:49:12.857415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.123 [2024-11-26 23:49:13.123397] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:25.123 [2024-11-26 23:49:13.123480] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:25.123 [2024-11-26 23:49:13.123507] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:25.123 [2024-11-26 23:49:13.125930] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:25.123 [2024-11-26 23:49:13.126586] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:25.123 [2024-11-26 23:49:13.126623] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:25.123 [2024-11-26 23:49:13.126973] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
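The bdev_hello_world test simply drives the packaged hello_bdev example against the first xNVMe bdev: the app opens nvme0n1 through the bdev layer, writes a string, reads it back, and stops once the read completes with "Hello World!". Run directly, the invocation looks like the sketch below (binary and JSON config paths are the ones shown in the run_test line above; the trailing empty string passed by the harness is unused in this run):

# run the example directly against the bdev described in bdev.json
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b nvme0n1
# on success it logs "Writing to the bdev", then "Read string from bdev : Hello World!"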
00:15:25.123 00:15:25.123 [2024-11-26 23:49:13.127001] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:25.383 00:15:25.383 real 0m0.790s 00:15:25.383 user 0m0.406s 00:15:25.383 sys 0m0.239s 00:15:25.383 ************************************ 00:15:25.383 END TEST bdev_hello_world 00:15:25.383 ************************************ 00:15:25.383 23:49:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.383 23:49:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:25.383 23:49:13 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:25.383 23:49:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:25.383 23:49:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.383 23:49:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.383 ************************************ 00:15:25.383 START TEST bdev_bounds 00:15:25.383 ************************************ 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83521 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:25.383 Process bdevio pid: 83521 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83521' 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83521 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83521 ']' 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:25.383 23:49:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:25.644 [2024-11-26 23:49:13.532138] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:15:25.644 [2024-11-26 23:49:13.532309] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83521 ] 00:15:25.644 [2024-11-26 23:49:13.681474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.644 [2024-11-26 23:49:13.726893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.644 [2024-11-26 23:49:13.727144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.644 [2024-11-26 23:49:13.727180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.588 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.588 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:26.588 23:49:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:26.588 I/O targets: 00:15:26.588 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:26.588 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:26.588 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:26.588 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:26.588 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:26.588 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:26.588 00:15:26.588 00:15:26.588 CUnit - A unit testing framework for C - Version 2.1-3 00:15:26.588 http://cunit.sourceforge.net/ 00:15:26.588 00:15:26.588 00:15:26.588 Suite: bdevio tests on: nvme3n1 00:15:26.588 Test: blockdev write read block ...passed 00:15:26.588 Test: blockdev write zeroes read block ...passed 00:15:26.588 Test: blockdev write zeroes read no split ...passed 00:15:26.588 Test: blockdev write zeroes read split ...passed 00:15:26.588 Test: blockdev write zeroes read split partial ...passed 00:15:26.588 Test: blockdev reset ...passed 00:15:26.588 Test: blockdev write read 8 blocks ...passed 00:15:26.588 Test: blockdev write read size > 128k ...passed 00:15:26.588 Test: blockdev write read invalid size ...passed 00:15:26.588 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:26.588 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:26.588 Test: blockdev write read max offset ...passed 00:15:26.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:26.588 Test: blockdev writev readv 8 blocks ...passed 00:15:26.588 Test: blockdev writev readv 30 x 1block ...passed 00:15:26.588 Test: blockdev writev readv block ...passed 00:15:26.588 Test: blockdev writev readv size > 128k ...passed 00:15:26.588 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:26.588 Test: blockdev comparev and writev ...passed 00:15:26.588 Test: blockdev nvme passthru rw ...passed 00:15:26.588 Test: blockdev nvme passthru vendor specific ...passed 00:15:26.588 Test: blockdev nvme admin passthru ...passed 00:15:26.588 Test: blockdev copy ...passed 00:15:26.588 Suite: bdevio tests on: nvme2n1 00:15:26.588 Test: blockdev write read block ...passed 00:15:26.588 Test: blockdev write zeroes read block ...passed 00:15:26.588 Test: blockdev write zeroes read no split ...passed 00:15:26.588 Test: blockdev write zeroes read split ...passed 00:15:26.588 Test: blockdev write zeroes read split partial ...passed 00:15:26.588 Test: blockdev reset ...passed 
00:15:26.588 Test: blockdev write read 8 blocks ...passed 00:15:26.588 Test: blockdev write read size > 128k ...passed 00:15:26.588 Test: blockdev write read invalid size ...passed 00:15:26.588 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:26.588 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:26.588 Test: blockdev write read max offset ...passed 00:15:26.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:26.588 Test: blockdev writev readv 8 blocks ...passed 00:15:26.588 Test: blockdev writev readv 30 x 1block ...passed 00:15:26.588 Test: blockdev writev readv block ...passed 00:15:26.588 Test: blockdev writev readv size > 128k ...passed 00:15:26.588 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:26.588 Test: blockdev comparev and writev ...passed 00:15:26.588 Test: blockdev nvme passthru rw ...passed 00:15:26.588 Test: blockdev nvme passthru vendor specific ...passed 00:15:26.588 Test: blockdev nvme admin passthru ...passed 00:15:26.588 Test: blockdev copy ...passed 00:15:26.588 Suite: bdevio tests on: nvme1n1 00:15:26.588 Test: blockdev write read block ...passed 00:15:26.588 Test: blockdev write zeroes read block ...passed 00:15:26.588 Test: blockdev write zeroes read no split ...passed 00:15:26.588 Test: blockdev write zeroes read split ...passed 00:15:26.588 Test: blockdev write zeroes read split partial ...passed 00:15:26.588 Test: blockdev reset ...passed 00:15:26.588 Test: blockdev write read 8 blocks ...passed 00:15:26.588 Test: blockdev write read size > 128k ...passed 00:15:26.588 Test: blockdev write read invalid size ...passed 00:15:26.588 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:26.588 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:26.588 Test: blockdev write read max offset ...passed 00:15:26.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:26.588 Test: blockdev writev readv 8 blocks ...passed 00:15:26.588 Test: blockdev writev readv 30 x 1block ...passed 00:15:26.588 Test: blockdev writev readv block ...passed 00:15:26.588 Test: blockdev writev readv size > 128k ...passed 00:15:26.588 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:26.588 Test: blockdev comparev and writev ...passed 00:15:26.588 Test: blockdev nvme passthru rw ...passed 00:15:26.588 Test: blockdev nvme passthru vendor specific ...passed 00:15:26.588 Test: blockdev nvme admin passthru ...passed 00:15:26.588 Test: blockdev copy ...passed 00:15:26.588 Suite: bdevio tests on: nvme0n3 00:15:26.588 Test: blockdev write read block ...passed 00:15:26.588 Test: blockdev write zeroes read block ...passed 00:15:26.588 Test: blockdev write zeroes read no split ...passed 00:15:26.588 Test: blockdev write zeroes read split ...passed 00:15:26.588 Test: blockdev write zeroes read split partial ...passed 00:15:26.588 Test: blockdev reset ...passed 00:15:26.588 Test: blockdev write read 8 blocks ...passed 00:15:26.588 Test: blockdev write read size > 128k ...passed 00:15:26.588 Test: blockdev write read invalid size ...passed 00:15:26.588 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:26.588 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:26.588 Test: blockdev write read max offset ...passed 00:15:26.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:26.588 Test: blockdev writev readv 8 blocks 
...passed 00:15:26.588 Test: blockdev writev readv 30 x 1block ...passed 00:15:26.588 Test: blockdev writev readv block ...passed 00:15:26.588 Test: blockdev writev readv size > 128k ...passed 00:15:26.588 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:26.588 Test: blockdev comparev and writev ...passed 00:15:26.588 Test: blockdev nvme passthru rw ...passed 00:15:26.588 Test: blockdev nvme passthru vendor specific ...passed 00:15:26.588 Test: blockdev nvme admin passthru ...passed 00:15:26.588 Test: blockdev copy ...passed 00:15:26.588 Suite: bdevio tests on: nvme0n2 00:15:26.588 Test: blockdev write read block ...passed 00:15:26.588 Test: blockdev write zeroes read block ...passed 00:15:26.588 Test: blockdev write zeroes read no split ...passed 00:15:26.588 Test: blockdev write zeroes read split ...passed 00:15:26.588 Test: blockdev write zeroes read split partial ...passed 00:15:26.588 Test: blockdev reset ...passed 00:15:26.588 Test: blockdev write read 8 blocks ...passed 00:15:26.588 Test: blockdev write read size > 128k ...passed 00:15:26.588 Test: blockdev write read invalid size ...passed 00:15:26.588 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:26.588 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:26.588 Test: blockdev write read max offset ...passed 00:15:26.588 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:26.588 Test: blockdev writev readv 8 blocks ...passed 00:15:26.588 Test: blockdev writev readv 30 x 1block ...passed 00:15:26.588 Test: blockdev writev readv block ...passed 00:15:26.588 Test: blockdev writev readv size > 128k ...passed 00:15:26.588 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:26.849 Test: blockdev comparev and writev ...passed 00:15:26.849 Test: blockdev nvme passthru rw ...passed 00:15:26.849 Test: blockdev nvme passthru vendor specific ...passed 00:15:26.849 Test: blockdev nvme admin passthru ...passed 00:15:26.849 Test: blockdev copy ...passed 00:15:26.849 Suite: bdevio tests on: nvme0n1 00:15:26.849 Test: blockdev write read block ...passed 00:15:26.849 Test: blockdev write zeroes read block ...passed 00:15:26.849 Test: blockdev write zeroes read no split ...passed 00:15:26.849 Test: blockdev write zeroes read split ...passed 00:15:26.849 Test: blockdev write zeroes read split partial ...passed 00:15:26.849 Test: blockdev reset ...passed 00:15:26.849 Test: blockdev write read 8 blocks ...passed 00:15:26.849 Test: blockdev write read size > 128k ...passed 00:15:26.849 Test: blockdev write read invalid size ...passed 00:15:26.849 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:26.849 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:26.849 Test: blockdev write read max offset ...passed 00:15:26.849 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:26.849 Test: blockdev writev readv 8 blocks ...passed 00:15:26.849 Test: blockdev writev readv 30 x 1block ...passed 00:15:26.849 Test: blockdev writev readv block ...passed 00:15:26.849 Test: blockdev writev readv size > 128k ...passed 00:15:26.849 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:26.849 Test: blockdev comparev and writev ...passed 00:15:26.849 Test: blockdev nvme passthru rw ...passed 00:15:26.849 Test: blockdev nvme passthru vendor specific ...passed 00:15:26.849 Test: blockdev nvme admin passthru ...passed 00:15:26.849 Test: blockdev copy ...passed 
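Each of the six suites above runs the same set of bdevio cases per bdev, 23 apiece for the 138 total reported below: boundary and max-offset reads and writes, invalid and oversized transfers, split and vectored I/O, compare, reset, NVMe passthru, and copy. The harness drives them by starting the bdevio app on the same bdev.json and then triggering the run over its RPC socket; a sketch with paths taken from the trace:

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" &
bdevio_pid=$!
# ... wait for the app to listen on /var/tmp/spdk.sock (waitforlisten in the harness), then:
"$spdk/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"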
00:15:26.849 00:15:26.849 Run Summary: Type Total Ran Passed Failed Inactive 00:15:26.849 suites 6 6 n/a 0 0 00:15:26.849 tests 138 138 138 0 0 00:15:26.849 asserts 780 780 780 0 n/a 00:15:26.849 00:15:26.849 Elapsed time = 0.598 seconds 00:15:26.849 0 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83521 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83521 ']' 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83521 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83521 00:15:26.849 killing process with pid 83521 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83521' 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83521 00:15:26.849 23:49:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83521 00:15:27.111 23:49:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:27.111 00:15:27.111 real 0m1.636s 00:15:27.111 user 0m3.914s 00:15:27.111 sys 0m0.406s 00:15:27.111 ************************************ 00:15:27.111 23:49:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.111 23:49:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:27.111 END TEST bdev_bounds 00:15:27.111 ************************************ 00:15:27.111 23:49:15 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:27.111 23:49:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:27.111 23:49:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.111 23:49:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.111 ************************************ 00:15:27.111 START TEST bdev_nbd 00:15:27.111 ************************************ 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83580 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83580 /var/tmp/spdk-nbd.sock 00:15:27.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83580 ']' 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:27.111 23:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:27.373 [2024-11-26 23:49:15.247647] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
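With /sys/module/nbd present, bdev_nbd starts a bdev_svc app on its own socket (/var/tmp/spdk-nbd.sock) and exports each of the six bdevs as a kernel NBD block device. The per-device check that follows in the trace is small: nbd_start_disk returns a /dev/nbdX node, the harness waits for that name to appear in /proc/partitions, and one 4 KiB direct-I/O dd read proves the device is usable. A condensed sketch of that loop (socket, rpc.py path, and dd flags from the trace; the real harness retries the partition check up to 20 times):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for bdev in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
    nbd_dev=$($rpc nbd_start_disk "$bdev")                     # e.g. /dev/nbd0
    grep -q -w "$(basename "$nbd_dev")" /proc/partitions || exit 1
    dd if="$nbd_dev" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
       bs=4096 count=1 iflag=direct
done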
00:15:27.373 [2024-11-26 23:49:15.247839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.373 [2024-11-26 23:49:15.404542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.373 [2024-11-26 23:49:15.452984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.318 
1+0 records in 00:15:28.318 1+0 records out 00:15:28.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895444 s, 4.6 MB/s 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:28.318 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.580 1+0 records in 00:15:28.580 1+0 records out 00:15:28.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123188 s, 3.3 MB/s 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:28.580 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:28.842 23:49:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.842 1+0 records in 00:15:28.842 1+0 records out 00:15:28.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106012 s, 3.9 MB/s 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:28.842 23:49:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:29.105 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:29.105 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:29.105 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.106 1+0 records in 00:15:29.106 1+0 records out 00:15:29.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909577 s, 4.5 MB/s 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.106 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.368 1+0 records in 00:15:29.368 1+0 records out 00:15:29.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112199 s, 3.7 MB/s 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.368 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:29.629 23:49:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.629 1+0 records in 00:15:29.629 1+0 records out 00:15:29.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013657 s, 3.0 MB/s 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.629 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd0", 00:15:29.891 "bdev_name": "nvme0n1" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd1", 00:15:29.891 "bdev_name": "nvme0n2" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd2", 00:15:29.891 "bdev_name": "nvme0n3" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd3", 00:15:29.891 "bdev_name": "nvme1n1" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd4", 00:15:29.891 "bdev_name": "nvme2n1" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd5", 00:15:29.891 "bdev_name": "nvme3n1" 00:15:29.891 } 00:15:29.891 ]' 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd0", 00:15:29.891 "bdev_name": "nvme0n1" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd1", 00:15:29.891 "bdev_name": "nvme0n2" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd2", 00:15:29.891 "bdev_name": "nvme0n3" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd3", 00:15:29.891 "bdev_name": "nvme1n1" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd4", 00:15:29.891 "bdev_name": "nvme2n1" 00:15:29.891 }, 00:15:29.891 { 00:15:29.891 "nbd_device": "/dev/nbd5", 00:15:29.891 "bdev_name": "nvme3n1" 00:15:29.891 } 00:15:29.891 ]' 00:15:29.891 23:49:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.891 23:49:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.152 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.153 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.414 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.672 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.929 23:49:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.929 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:31.187 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:31.187 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:31.187 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:31.187 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.187 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.187 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:31.187 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.187 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.188 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:31.188 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.188 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:31.444 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:31.445 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:31.705 /dev/nbd0 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.705 1+0 records in 00:15:31.705 1+0 records out 00:15:31.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869539 s, 4.7 MB/s 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:31.705 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:31.967 /dev/nbd1 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.967 1+0 records in 00:15:31.967 1+0 records out 00:15:31.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109174 s, 3.8 MB/s 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:31.967 23:49:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:31.967 23:49:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:32.228 /dev/nbd10 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.228 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.229 1+0 records in 00:15:32.229 1+0 records out 00:15:32.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104981 s, 3.9 MB/s 00:15:32.229 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.229 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.229 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.229 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.229 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.229 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.229 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:32.229 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:32.489 /dev/nbd11 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.489 23:49:20 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.489 1+0 records in 00:15:32.489 1+0 records out 00:15:32.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111001 s, 3.7 MB/s 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:32.489 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:32.749 /dev/nbd12 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.749 1+0 records in 00:15:32.749 1+0 records out 00:15:32.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140654 s, 2.9 MB/s 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:32.749 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:33.011 /dev/nbd13 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.011 1+0 records in 00:15:33.011 1+0 records out 00:15:33.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123409 s, 3.3 MB/s 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.011 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:33.012 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:33.012 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.012 23:49:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd0", 00:15:33.274 "bdev_name": "nvme0n1" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd1", 00:15:33.274 "bdev_name": "nvme0n2" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd10", 00:15:33.274 "bdev_name": "nvme0n3" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd11", 00:15:33.274 "bdev_name": "nvme1n1" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd12", 00:15:33.274 "bdev_name": "nvme2n1" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd13", 00:15:33.274 "bdev_name": "nvme3n1" 00:15:33.274 } 00:15:33.274 ]' 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd0", 00:15:33.274 "bdev_name": "nvme0n1" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd1", 00:15:33.274 "bdev_name": "nvme0n2" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd10", 00:15:33.274 "bdev_name": "nvme0n3" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd11", 00:15:33.274 "bdev_name": "nvme1n1" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd12", 00:15:33.274 "bdev_name": "nvme2n1" 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "nbd_device": "/dev/nbd13", 00:15:33.274 "bdev_name": "nvme3n1" 00:15:33.274 } 00:15:33.274 ]' 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:33.274 /dev/nbd1 00:15:33.274 /dev/nbd10 00:15:33.274 /dev/nbd11 00:15:33.274 /dev/nbd12 00:15:33.274 /dev/nbd13' 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:33.274 /dev/nbd1 00:15:33.274 /dev/nbd10 00:15:33.274 /dev/nbd11 00:15:33.274 /dev/nbd12 00:15:33.274 /dev/nbd13' 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:33.274 256+0 records in 00:15:33.274 256+0 records out 00:15:33.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113958 s, 92.0 MB/s 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:33.274 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:33.536 256+0 records in 00:15:33.536 256+0 records out 00:15:33.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.221053 s, 4.7 MB/s 00:15:33.536 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:33.536 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:33.798 256+0 records in 00:15:33.798 256+0 records out 00:15:33.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201836 s, 
5.2 MB/s 00:15:33.798 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:33.798 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:34.058 256+0 records in 00:15:34.058 256+0 records out 00:15:34.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233847 s, 4.5 MB/s 00:15:34.058 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.058 23:49:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:34.058 256+0 records in 00:15:34.058 256+0 records out 00:15:34.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245775 s, 4.3 MB/s 00:15:34.058 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.058 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:34.319 256+0 records in 00:15:34.319 256+0 records out 00:15:34.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.208327 s, 5.0 MB/s 00:15:34.319 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.319 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:34.581 256+0 records in 00:15:34.581 256+0 records out 00:15:34.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.277049 s, 3.8 MB/s 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.581 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:34.843 
23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.843 23:49:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.104 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.364 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.624 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.885 23:49:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.147 
23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.147 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:36.408 malloc_lvol_verify 00:15:36.408 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:36.669 7052962a-c191-4a09-bb00-784c6772e14f 00:15:36.669 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:36.930 76a1ca07-366a-4fd0-b0d9-c1aec4d7ed80 00:15:36.930 23:49:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:37.190 /dev/nbd0 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:37.190 mke2fs 1.47.0 (5-Feb-2023) 00:15:37.190 Discarding device blocks: 0/4096 
done 00:15:37.190 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:37.190 00:15:37.190 Allocating group tables: 0/1 done 00:15:37.190 Writing inode tables: 0/1 done 00:15:37.190 Creating journal (1024 blocks): done 00:15:37.190 Writing superblocks and filesystem accounting information: 0/1 done 00:15:37.190 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.190 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83580 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83580 ']' 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83580 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83580 00:15:37.451 killing process with pid 83580 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83580' 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83580 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83580 00:15:37.451 ************************************ 00:15:37.451 END TEST bdev_nbd 00:15:37.451 ************************************ 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:37.451 00:15:37.451 real 0m10.351s 00:15:37.451 user 0m14.008s 00:15:37.451 sys 0m3.851s 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.451 23:49:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
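For readers following the trace above: the bdev_nbd test drives everything through scripts/rpc.py against the socket /var/tmp/spdk-nbd.sock — nbd_start_disk to export a bdev as /dev/nbdX, a poll of /proc/partitions plus a one-block O_DIRECT dd to confirm the device is usable, and nbd_stop_disk to tear it down. The fragment below is a minimal standalone sketch of that start/wait/verify/stop cycle, not part of the test suite itself; it assumes an SPDK target is already running with its RPC socket at /var/tmp/spdk-nbd.sock and exposing a bdev named nvme0n1, and the helper name wait_for_nbd (and its retry interval) is illustrative rather than taken from the scripts.

#!/usr/bin/env bash
# Sketch only: export one SPDK bdev over NBD, sanity-check it, and detach it.
# Assumes the SPDK app is running with its RPC socket at /var/tmp/spdk-nbd.sock
# and that a bdev named "nvme0n1" exists; adjust RPC, BDEV and NBD as needed.
set -euo pipefail

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
BDEV=nvme0n1
NBD=/dev/nbd0

# Attach the bdev to the NBD device via the nbd_start_disk RPC.
$RPC nbd_start_disk "$BDEV" "$NBD"

# Wait for the kernel to publish the device, roughly as the waitfornbd helper
# in the trace does (it checks /proc/partitions up to 20 times).
wait_for_nbd() {
    local name i
    name=$(basename "$1")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && return 0
        sleep 0.1   # retry interval is illustrative; the real helper's pacing is not shown above
    done
    return 1
}
wait_for_nbd "$NBD"

# Read one 4 KiB block with O_DIRECT and confirm it is non-empty, mirroring the
# dd/stat check the trace performs for each nbd device.
dd if="$NBD" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
rm -f /tmp/nbdtest

# List the active NBD exports, then detach the device again.
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'
$RPC nbd_stop_disk "$NBD"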
00:15:37.451 23:49:25 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:37.451 23:49:25 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:37.451 23:49:25 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:37.451 23:49:25 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:37.451 23:49:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.451 23:49:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.451 23:49:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.711 ************************************ 00:15:37.711 START TEST bdev_fio 00:15:37.711 ************************************ 00:15:37.711 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:37.711 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:37.711 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:37.711 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:37.712 ************************************ 00:15:37.712 START TEST bdev_fio_rw_verify 00:15:37.712 ************************************ 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:37.712 23:49:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:37.974 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:37.974 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:37.974 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:37.974 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:37.974 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:37.974 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:37.974 fio-3.35 00:15:37.974 Starting 6 threads 00:15:50.238 00:15:50.238 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83982: Tue Nov 26 23:49:36 2024 00:15:50.238 read: IOPS=14.1k, BW=55.2MiB/s (57.9MB/s)(552MiB/10002msec) 00:15:50.238 slat (usec): min=2, max=3979, avg= 7.41, stdev=21.86 00:15:50.238 clat (usec): min=92, max=8495, avg=1380.65, stdev=791.32 00:15:50.238 lat (usec): min=103, max=8521, avg=1388.06, stdev=792.12 
00:15:50.238 clat percentiles (usec): 00:15:50.238 | 50.000th=[ 1287], 99.000th=[ 3818], 99.900th=[ 5669], 99.990th=[ 7439], 00:15:50.238 | 99.999th=[ 8455] 00:15:50.238 write: IOPS=14.4k, BW=56.1MiB/s (58.8MB/s)(561MiB/10002msec); 0 zone resets 00:15:50.238 slat (usec): min=13, max=4138, avg=43.61, stdev=145.36 00:15:50.238 clat (usec): min=90, max=9320, avg=1641.66, stdev=846.64 00:15:50.238 lat (usec): min=107, max=9336, avg=1685.26, stdev=859.10 00:15:50.238 clat percentiles (usec): 00:15:50.238 | 50.000th=[ 1516], 99.000th=[ 4228], 99.900th=[ 5735], 99.990th=[ 6915], 00:15:50.238 | 99.999th=[ 9241] 00:15:50.238 bw ( KiB/s): min=47982, max=94174, per=100.00%, avg=57484.53, stdev=1861.57, samples=114 00:15:50.238 iops : min=11991, max=23542, avg=14370.21, stdev=465.39, samples=114 00:15:50.238 lat (usec) : 100=0.01%, 250=2.06%, 500=6.68%, 750=8.60%, 1000=10.95% 00:15:50.238 lat (msec) : 2=48.47%, 4=22.17%, 10=1.08% 00:15:50.238 cpu : usr=43.23%, sys=32.54%, ctx=5611, majf=0, minf=16230 00:15:50.238 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.238 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.238 issued rwts: total=141312,143700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.238 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:50.238 00:15:50.238 Run status group 0 (all jobs): 00:15:50.238 READ: bw=55.2MiB/s (57.9MB/s), 55.2MiB/s-55.2MiB/s (57.9MB/s-57.9MB/s), io=552MiB (579MB), run=10002-10002msec 00:15:50.238 WRITE: bw=56.1MiB/s (58.8MB/s), 56.1MiB/s-56.1MiB/s (58.8MB/s-58.8MB/s), io=561MiB (589MB), run=10002-10002msec 00:15:50.239 ----------------------------------------------------- 00:15:50.239 Suppressions used: 00:15:50.239 count bytes template 00:15:50.239 6 48 /usr/src/fio/parse.c 00:15:50.239 2302 220992 /usr/src/fio/iolog.c 00:15:50.239 1 8 libtcmalloc_minimal.so 00:15:50.239 1 904 libcrypto.so 00:15:50.239 ----------------------------------------------------- 00:15:50.239 00:15:50.239 00:15:50.239 real 0m11.337s 00:15:50.239 user 0m26.740s 00:15:50.239 sys 0m19.934s 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.239 ************************************ 00:15:50.239 END TEST bdev_fio_rw_verify 00:15:50.239 ************************************ 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1947d359-6311-44af-af51-a2aafe97ee81"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1947d359-6311-44af-af51-a2aafe97ee81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "ffd906bc-e295-425d-89cf-4d2aa240ffe5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffd906bc-e295-425d-89cf-4d2aa240ffe5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "3d686344-4531-4ae5-947a-08be2e7c45d7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d686344-4531-4ae5-947a-08be2e7c45d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dd0e50a7-332b-4985-89fa-0f15e7cca0f4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dd0e50a7-332b-4985-89fa-0f15e7cca0f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "3f0bfdb2-9e79-469e-badf-ec514c0e30f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3f0bfdb2-9e79-469e-badf-ec514c0e30f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "053b1e88-c0dc-4892-b0f5-676d1a0538eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "053b1e88-c0dc-4892-b0f5-676d1a0538eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:50.239 /home/vagrant/spdk_repo/spdk 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:50.239 00:15:50.239 real 0m11.525s 00:15:50.239 user 0m26.811s 
00:15:50.239 sys 0m20.027s 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.239 ************************************ 00:15:50.239 END TEST bdev_fio 00:15:50.239 ************************************ 00:15:50.239 23:49:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:50.239 23:49:37 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:50.239 23:49:37 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:50.239 23:49:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:50.239 23:49:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.239 23:49:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.239 ************************************ 00:15:50.239 START TEST bdev_verify 00:15:50.239 ************************************ 00:15:50.239 23:49:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:50.239 [2024-11-26 23:49:37.260693] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:15:50.239 [2024-11-26 23:49:37.260862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84149 ] 00:15:50.239 [2024-11-26 23:49:37.411520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:50.239 [2024-11-26 23:49:37.454025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.239 [2024-11-26 23:49:37.454130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.239 Running I/O for 5 seconds... 
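A note on the bdevperf invocations in this and the next two tests, reconstructed from the command lines visible in the log: the verify pass drives the standalone bdevperf example directly against the generated bdev.json. A minimal by-hand rerun, assuming the SPDK repo root as the working directory, might look like the sketch below; the big-I/O pass later in the log only raises -o to 65536, and the write_zeroes pass switches -w and runs on a single core.

    # -q = queue depth per job, -o = I/O size in bytes, -w = workload type,
    # -t = run time in seconds, -m = core mask. Judging from the per-core job
    # lines below, -C makes every core in the mask drive every bdev, which is
    # why each nvme*n* device is reported twice (core mask 0x1 and 0x2).
    ./build/examples/bdevperf \
        --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3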
00:15:52.194 23040.00 IOPS, 90.00 MiB/s [2024-11-26T23:49:41.267Z] 22880.00 IOPS, 89.38 MiB/s [2024-11-26T23:49:42.212Z] 23242.67 IOPS, 90.79 MiB/s [2024-11-26T23:49:43.155Z] 23096.00 IOPS, 90.22 MiB/s [2024-11-26T23:49:43.155Z] 23028.00 IOPS, 89.95 MiB/s 00:15:55.024 Latency(us) 00:15:55.024 [2024-11-26T23:49:43.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.024 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x0 length 0x80000 00:15:55.024 nvme0n1 : 5.02 1835.24 7.17 0.00 0.00 69609.47 10586.58 72190.42 00:15:55.024 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x80000 length 0x80000 00:15:55.024 nvme0n1 : 5.03 1833.75 7.16 0.00 0.00 69667.81 7360.20 79046.50 00:15:55.024 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x0 length 0x80000 00:15:55.024 nvme0n2 : 5.07 1842.77 7.20 0.00 0.00 69188.98 9779.99 72190.42 00:15:55.024 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x80000 length 0x80000 00:15:55.024 nvme0n2 : 5.05 1798.55 7.03 0.00 0.00 70890.35 11645.24 79853.10 00:15:55.024 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x0 length 0x80000 00:15:55.024 nvme0n3 : 5.07 1841.20 7.19 0.00 0.00 69104.91 10889.06 72997.02 00:15:55.024 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x80000 length 0x80000 00:15:55.024 nvme0n3 : 5.04 1801.54 7.04 0.00 0.00 70629.52 13510.50 76223.41 00:15:55.024 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x0 length 0x20000 00:15:55.024 nvme1n1 : 5.09 1837.51 7.18 0.00 0.00 69124.07 10284.11 69770.63 00:15:55.024 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x20000 length 0x20000 00:15:55.024 nvme1n1 : 5.05 1800.79 7.03 0.00 0.00 70514.71 6755.25 64931.05 00:15:55.024 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x0 length 0xa0000 00:15:55.024 nvme2n1 : 5.05 1849.25 7.22 0.00 0.00 68537.84 10132.87 69367.34 00:15:55.024 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0xa0000 length 0xa0000 00:15:55.024 nvme2n1 : 5.06 1819.67 7.11 0.00 0.00 69646.17 7763.50 65737.65 00:15:55.024 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0x0 length 0xbd0bd 00:15:55.024 nvme3n1 : 5.09 2183.56 8.53 0.00 0.00 57765.63 2974.33 129862.10 00:15:55.024 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.024 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:55.024 nvme3n1 : 5.08 2386.63 9.32 0.00 0.00 52908.68 529.33 132281.90 00:15:55.024 [2024-11-26T23:49:43.155Z] =================================================================================================================== 00:15:55.024 [2024-11-26T23:49:43.155Z] Total : 22830.46 89.18 0.00 0.00 66776.78 529.33 132281.90 00:15:55.024 00:15:55.024 real 0m5.911s 00:15:55.024 user 0m9.430s 00:15:55.024 sys 0m1.464s 00:15:55.024 23:49:43 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.024 23:49:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:55.024 ************************************ 00:15:55.024 END TEST bdev_verify 00:15:55.024 ************************************ 00:15:55.024 23:49:43 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:55.024 23:49:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:55.024 23:49:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.024 23:49:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.286 ************************************ 00:15:55.286 START TEST bdev_verify_big_io 00:15:55.286 ************************************ 00:15:55.286 23:49:43 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:55.286 [2024-11-26 23:49:43.235043] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:15:55.286 [2024-11-26 23:49:43.235183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84237 ] 00:15:55.286 [2024-11-26 23:49:43.381359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:55.286 [2024-11-26 23:49:43.411535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.286 [2024-11-26 23:49:43.411851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.860 Running I/O for 5 seconds... 
00:16:01.726 1916.00 IOPS, 119.75 MiB/s [2024-11-26T23:49:49.857Z] 3091.00 IOPS, 193.19 MiB/s [2024-11-26T23:49:50.429Z] 2629.33 IOPS, 164.33 MiB/s 00:16:02.298 Latency(us) 00:16:02.298 [2024-11-26T23:49:50.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.298 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x0 length 0x8000 00:16:02.298 nvme0n1 : 5.85 87.49 5.47 0.00 0.00 1385901.93 7511.43 1677721.60 00:16:02.298 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x8000 length 0x8000 00:16:02.298 nvme0n1 : 5.86 131.05 8.19 0.00 0.00 959807.57 11846.89 942105.21 00:16:02.298 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x0 length 0x8000 00:16:02.298 nvme0n2 : 5.85 60.13 3.76 0.00 0.00 1921486.88 162125.98 3097332.18 00:16:02.298 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x8000 length 0x8000 00:16:02.298 nvme0n2 : 5.72 111.89 6.99 0.00 0.00 1094995.02 79853.10 890483.00 00:16:02.298 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x0 length 0x8000 00:16:02.298 nvme0n3 : 5.92 97.23 6.08 0.00 0.00 1129670.50 67754.14 1432516.14 00:16:02.298 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x8000 length 0x8000 00:16:02.298 nvme0n3 : 5.84 123.25 7.70 0.00 0.00 960175.78 115343.36 967916.31 00:16:02.298 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x0 length 0x2000 00:16:02.298 nvme1n1 : 6.04 135.19 8.45 0.00 0.00 783781.34 22181.42 1013085.74 00:16:02.298 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x2000 length 0x2000 00:16:02.298 nvme1n1 : 5.73 108.99 6.81 0.00 0.00 1054951.86 83079.48 2284282.49 00:16:02.298 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x0 length 0xa000 00:16:02.298 nvme2n1 : 6.16 154.92 9.68 0.00 0.00 655000.56 5747.00 1793871.56 00:16:02.298 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0xa000 length 0xa000 00:16:02.298 nvme2n1 : 5.85 120.36 7.52 0.00 0.00 924183.06 95178.44 896935.78 00:16:02.298 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0x0 length 0xbd0b 00:16:02.298 nvme3n1 : 6.46 284.35 17.77 0.00 0.00 342822.98 2155.13 2594015.70 00:16:02.298 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:02.298 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:02.298 nvme3n1 : 5.86 166.69 10.42 0.00 0.00 654270.49 2671.85 967916.31 00:16:02.298 [2024-11-26T23:49:50.429Z] =================================================================================================================== 00:16:02.298 [2024-11-26T23:49:50.429Z] Total : 1581.54 98.85 0.00 0.00 844823.05 2155.13 3097332.18 00:16:02.298 00:16:02.298 real 0m7.217s 00:16:02.298 user 0m13.343s 00:16:02.298 sys 0m0.432s 00:16:02.298 23:49:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.298 23:49:50 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.298 ************************************ 00:16:02.298 END TEST bdev_verify_big_io 00:16:02.298 ************************************ 00:16:02.560 23:49:50 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:02.560 23:49:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:02.560 23:49:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.560 23:49:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:02.560 ************************************ 00:16:02.560 START TEST bdev_write_zeroes 00:16:02.560 ************************************ 00:16:02.560 23:49:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:02.560 [2024-11-26 23:49:50.529219] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:16:02.560 [2024-11-26 23:49:50.529377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84338 ] 00:16:02.560 [2024-11-26 23:49:50.678979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.821 [2024-11-26 23:49:50.707869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.084 Running I/O for 1 seconds... 
00:16:04.049 65184.00 IOPS, 254.62 MiB/s 00:16:04.049 Latency(us) 00:16:04.049 [2024-11-26T23:49:52.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.049 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:04.049 nvme0n1 : 1.02 10672.56 41.69 0.00 0.00 11981.96 5494.94 21072.34 00:16:04.049 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:04.049 nvme0n2 : 1.01 10723.77 41.89 0.00 0.00 11914.27 5494.94 20366.57 00:16:04.049 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:04.049 nvme0n3 : 1.02 10711.02 41.84 0.00 0.00 11913.82 4990.82 20064.10 00:16:04.049 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:04.049 nvme1n1 : 1.02 10698.92 41.79 0.00 0.00 11917.15 5016.02 20669.05 00:16:04.049 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:04.049 nvme2n1 : 1.02 10686.87 41.75 0.00 0.00 11919.09 4864.79 21475.64 00:16:04.049 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:04.049 nvme3n1 : 1.03 11495.75 44.91 0.00 0.00 11066.30 4763.96 19862.45 00:16:04.049 [2024-11-26T23:49:52.180Z] =================================================================================================================== 00:16:04.049 [2024-11-26T23:49:52.180Z] Total : 64988.89 253.86 0.00 0.00 11775.35 4763.96 21475.64 00:16:04.310 00:16:04.311 real 0m1.816s 00:16:04.311 user 0m1.142s 00:16:04.311 sys 0m0.478s 00:16:04.311 23:49:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.311 ************************************ 00:16:04.311 END TEST bdev_write_zeroes 00:16:04.311 ************************************ 00:16:04.311 23:49:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:04.311 23:49:52 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.311 23:49:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:04.311 23:49:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.311 23:49:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.311 ************************************ 00:16:04.311 START TEST bdev_json_nonenclosed 00:16:04.311 ************************************ 00:16:04.311 23:49:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.311 [2024-11-26 23:49:52.416054] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:16:04.311 [2024-11-26 23:49:52.416201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84379 ] 00:16:04.573 [2024-11-26 23:49:52.564909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.573 [2024-11-26 23:49:52.606784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.573 [2024-11-26 23:49:52.606916] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:04.573 [2024-11-26 23:49:52.606936] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:04.573 [2024-11-26 23:49:52.606951] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:04.573 00:16:04.573 real 0m0.358s 00:16:04.573 user 0m0.148s 00:16:04.573 sys 0m0.106s 00:16:04.573 23:49:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.573 ************************************ 00:16:04.573 END TEST bdev_json_nonenclosed 00:16:04.573 ************************************ 00:16:04.573 23:49:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:04.835 23:49:52 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.835 23:49:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:04.835 23:49:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.835 23:49:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.835 ************************************ 00:16:04.835 START TEST bdev_json_nonarray 00:16:04.835 ************************************ 00:16:04.835 23:49:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.835 [2024-11-26 23:49:52.841353] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:16:04.835 [2024-11-26 23:49:52.841508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84403 ] 00:16:05.096 [2024-11-26 23:49:52.986947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.096 [2024-11-26 23:49:53.028347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.096 [2024-11-26 23:49:53.028470] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
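The two negative tests here hand bdevperf deliberately malformed configs, and the errors above show the two constraints json_config enforces: the file must be a single JSON object, and its "subsystems" key must be an array. A minimal well-formed skeleton, assuming an empty bdev subsystem is enough for the tool being launched, is sketched below; it mirrors the structure of the full configs saved later in this log (each subsystem entry carries a "config" list of method/params pairs).

    # Hypothetical path; the nonenclosed.json/nonarray.json used above are the
    # intentionally broken counterparts of this shape.
    cat > /tmp/minimal_bdev.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF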
00:16:05.096 [2024-11-26 23:49:53.028490] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:05.096 [2024-11-26 23:49:53.028508] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:05.096 00:16:05.096 real 0m0.356s 00:16:05.096 user 0m0.135s 00:16:05.096 sys 0m0.116s 00:16:05.096 23:49:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.096 23:49:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:05.096 ************************************ 00:16:05.096 END TEST bdev_json_nonarray 00:16:05.096 ************************************ 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:05.096 23:49:53 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:05.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:15.670 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.670 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.670 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.670 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.670 00:16:15.670 real 0m54.056s 00:16:15.670 user 1m13.646s 00:16:15.670 sys 0m54.462s 00:16:15.670 23:50:03 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.670 23:50:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.670 ************************************ 00:16:15.670 END TEST blockdev_xnvme 00:16:15.670 ************************************ 00:16:15.670 23:50:03 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:15.670 23:50:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.670 23:50:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.670 23:50:03 -- common/autotest_common.sh@10 -- # set +x 00:16:15.670 ************************************ 00:16:15.670 START TEST ublk 00:16:15.670 ************************************ 00:16:15.670 23:50:03 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:15.957 * Looking for test storage... 
00:16:15.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:15.957 23:50:03 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.957 23:50:03 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.957 23:50:03 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.957 23:50:03 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.957 23:50:03 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.957 23:50:03 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.957 23:50:03 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.957 23:50:03 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.957 23:50:03 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.957 23:50:03 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.957 23:50:03 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.957 23:50:03 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:15.957 23:50:03 ublk -- scripts/common.sh@345 -- # : 1 00:16:15.957 23:50:03 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.957 23:50:03 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.957 23:50:03 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:15.957 23:50:03 ublk -- scripts/common.sh@353 -- # local d=1 00:16:15.957 23:50:03 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.957 23:50:03 ublk -- scripts/common.sh@355 -- # echo 1 00:16:15.957 23:50:03 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.957 23:50:03 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:15.957 23:50:03 ublk -- scripts/common.sh@353 -- # local d=2 00:16:15.957 23:50:03 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.957 23:50:03 ublk -- scripts/common.sh@355 -- # echo 2 00:16:15.957 23:50:03 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.957 23:50:03 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.957 23:50:03 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.957 23:50:03 ublk -- scripts/common.sh@368 -- # return 0 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:15.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.957 --rc genhtml_branch_coverage=1 00:16:15.957 --rc genhtml_function_coverage=1 00:16:15.957 --rc genhtml_legend=1 00:16:15.957 --rc geninfo_all_blocks=1 00:16:15.957 --rc geninfo_unexecuted_blocks=1 00:16:15.957 00:16:15.957 ' 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:15.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.957 --rc genhtml_branch_coverage=1 00:16:15.957 --rc genhtml_function_coverage=1 00:16:15.957 --rc genhtml_legend=1 00:16:15.957 --rc geninfo_all_blocks=1 00:16:15.957 --rc geninfo_unexecuted_blocks=1 00:16:15.957 00:16:15.957 ' 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:15.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.957 --rc genhtml_branch_coverage=1 00:16:15.957 --rc 
genhtml_function_coverage=1 00:16:15.957 --rc genhtml_legend=1 00:16:15.957 --rc geninfo_all_blocks=1 00:16:15.957 --rc geninfo_unexecuted_blocks=1 00:16:15.957 00:16:15.957 ' 00:16:15.957 23:50:03 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:15.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.957 --rc genhtml_branch_coverage=1 00:16:15.957 --rc genhtml_function_coverage=1 00:16:15.957 --rc genhtml_legend=1 00:16:15.957 --rc geninfo_all_blocks=1 00:16:15.957 --rc geninfo_unexecuted_blocks=1 00:16:15.957 00:16:15.957 ' 00:16:15.957 23:50:03 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:15.957 23:50:03 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:15.957 23:50:03 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:15.957 23:50:03 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:15.957 23:50:03 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:15.958 23:50:03 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:15.958 23:50:03 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:15.958 23:50:03 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:15.958 23:50:03 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:15.958 23:50:03 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:15.958 23:50:03 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.958 23:50:03 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.958 23:50:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 ************************************ 00:16:15.958 START TEST test_save_ublk_config 00:16:15.958 ************************************ 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=84709 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 84709 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84709 ']' 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
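Once this target is listening, the test creates a malloc bdev, exposes it as a ublk disk, saves the running config (echoed twice further down), and then replays that config into a second spdk_tgt via '-c /dev/fd/63', checking that /dev/ublkb0 comes back. A trimmed sketch of the part of the saved config that recreates the device is below; every value is taken from the dumps that follow, but whether this cut-down form alone is sufficient is an assumption, since the test replays the full dump.

    # Keep only the bdev and ublk subsystems from the saved config (sketch).
    cat > /tmp/ublk_restore.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 }
            }
          ]
        },
        {
          "subsystem": "ublk",
          "config": [
            { "method": "ublk_create_target", "params": { "cpumask": "1" } },
            {
              "method": "ublk_start_disk",
              "params": { "bdev_name": "malloc0", "ublk_id": 0, "num_queues": 1, "queue_depth": 128 }
            }
          ]
        }
      ]
    }
    EOF
    ./build/bin/spdk_tgt -c /tmp/ublk_restore.json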
00:16:15.958 23:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.958 23:50:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 [2024-11-26 23:50:03.999656] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:16:15.958 [2024-11-26 23:50:03.999786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84709 ] 00:16:16.219 [2024-11-26 23:50:04.147293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.219 [2024-11-26 23:50:04.174632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:16.790 [2024-11-26 23:50:04.806819] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:16.790 [2024-11-26 23:50:04.808019] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:16.790 malloc0 00:16:16.790 [2024-11-26 23:50:04.846946] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:16.790 [2024-11-26 23:50:04.847041] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:16.790 [2024-11-26 23:50:04.847055] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:16.790 [2024-11-26 23:50:04.847074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:16.790 [2024-11-26 23:50:04.855972] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:16.790 [2024-11-26 23:50:04.856016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:16.790 [2024-11-26 23:50:04.862832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:16.790 [2024-11-26 23:50:04.862973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:16.790 [2024-11-26 23:50:04.879818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:16.790 0 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.790 23:50:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:17.052 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.052 23:50:05 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:17.052 
"subsystems": [ 00:16:17.052 { 00:16:17.052 "subsystem": "fsdev", 00:16:17.052 "config": [ 00:16:17.052 { 00:16:17.052 "method": "fsdev_set_opts", 00:16:17.052 "params": { 00:16:17.052 "fsdev_io_pool_size": 65535, 00:16:17.052 "fsdev_io_cache_size": 256 00:16:17.052 } 00:16:17.052 } 00:16:17.052 ] 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "keyring", 00:16:17.052 "config": [] 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "iobuf", 00:16:17.052 "config": [ 00:16:17.052 { 00:16:17.052 "method": "iobuf_set_options", 00:16:17.052 "params": { 00:16:17.052 "small_pool_count": 8192, 00:16:17.052 "large_pool_count": 1024, 00:16:17.052 "small_bufsize": 8192, 00:16:17.052 "large_bufsize": 135168, 00:16:17.052 "enable_numa": false 00:16:17.052 } 00:16:17.052 } 00:16:17.052 ] 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "sock", 00:16:17.052 "config": [ 00:16:17.052 { 00:16:17.052 "method": "sock_set_default_impl", 00:16:17.052 "params": { 00:16:17.052 "impl_name": "posix" 00:16:17.052 } 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "method": "sock_impl_set_options", 00:16:17.052 "params": { 00:16:17.052 "impl_name": "ssl", 00:16:17.052 "recv_buf_size": 4096, 00:16:17.052 "send_buf_size": 4096, 00:16:17.052 "enable_recv_pipe": true, 00:16:17.052 "enable_quickack": false, 00:16:17.052 "enable_placement_id": 0, 00:16:17.052 "enable_zerocopy_send_server": true, 00:16:17.052 "enable_zerocopy_send_client": false, 00:16:17.052 "zerocopy_threshold": 0, 00:16:17.052 "tls_version": 0, 00:16:17.052 "enable_ktls": false 00:16:17.052 } 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "method": "sock_impl_set_options", 00:16:17.052 "params": { 00:16:17.052 "impl_name": "posix", 00:16:17.052 "recv_buf_size": 2097152, 00:16:17.052 "send_buf_size": 2097152, 00:16:17.052 "enable_recv_pipe": true, 00:16:17.052 "enable_quickack": false, 00:16:17.052 "enable_placement_id": 0, 00:16:17.052 "enable_zerocopy_send_server": true, 00:16:17.052 "enable_zerocopy_send_client": false, 00:16:17.052 "zerocopy_threshold": 0, 00:16:17.052 "tls_version": 0, 00:16:17.052 "enable_ktls": false 00:16:17.052 } 00:16:17.052 } 00:16:17.052 ] 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "vmd", 00:16:17.052 "config": [] 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "accel", 00:16:17.052 "config": [ 00:16:17.052 { 00:16:17.052 "method": "accel_set_options", 00:16:17.052 "params": { 00:16:17.052 "small_cache_size": 128, 00:16:17.052 "large_cache_size": 16, 00:16:17.052 "task_count": 2048, 00:16:17.052 "sequence_count": 2048, 00:16:17.052 "buf_count": 2048 00:16:17.052 } 00:16:17.052 } 00:16:17.052 ] 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "bdev", 00:16:17.052 "config": [ 00:16:17.052 { 00:16:17.052 "method": "bdev_set_options", 00:16:17.052 "params": { 00:16:17.052 "bdev_io_pool_size": 65535, 00:16:17.052 "bdev_io_cache_size": 256, 00:16:17.052 "bdev_auto_examine": true, 00:16:17.052 "iobuf_small_cache_size": 128, 00:16:17.052 "iobuf_large_cache_size": 16 00:16:17.052 } 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "method": "bdev_raid_set_options", 00:16:17.052 "params": { 00:16:17.052 "process_window_size_kb": 1024, 00:16:17.052 "process_max_bandwidth_mb_sec": 0 00:16:17.052 } 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "method": "bdev_iscsi_set_options", 00:16:17.052 "params": { 00:16:17.052 "timeout_sec": 30 00:16:17.052 } 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "method": "bdev_nvme_set_options", 00:16:17.052 "params": { 00:16:17.052 "action_on_timeout": "none", 
00:16:17.052 "timeout_us": 0, 00:16:17.052 "timeout_admin_us": 0, 00:16:17.052 "keep_alive_timeout_ms": 10000, 00:16:17.052 "arbitration_burst": 0, 00:16:17.052 "low_priority_weight": 0, 00:16:17.052 "medium_priority_weight": 0, 00:16:17.052 "high_priority_weight": 0, 00:16:17.052 "nvme_adminq_poll_period_us": 10000, 00:16:17.052 "nvme_ioq_poll_period_us": 0, 00:16:17.052 "io_queue_requests": 0, 00:16:17.052 "delay_cmd_submit": true, 00:16:17.052 "transport_retry_count": 4, 00:16:17.052 "bdev_retry_count": 3, 00:16:17.052 "transport_ack_timeout": 0, 00:16:17.052 "ctrlr_loss_timeout_sec": 0, 00:16:17.052 "reconnect_delay_sec": 0, 00:16:17.052 "fast_io_fail_timeout_sec": 0, 00:16:17.052 "disable_auto_failback": false, 00:16:17.052 "generate_uuids": false, 00:16:17.052 "transport_tos": 0, 00:16:17.052 "nvme_error_stat": false, 00:16:17.052 "rdma_srq_size": 0, 00:16:17.052 "io_path_stat": false, 00:16:17.052 "allow_accel_sequence": false, 00:16:17.052 "rdma_max_cq_size": 0, 00:16:17.052 "rdma_cm_event_timeout_ms": 0, 00:16:17.052 "dhchap_digests": [ 00:16:17.052 "sha256", 00:16:17.052 "sha384", 00:16:17.052 "sha512" 00:16:17.052 ], 00:16:17.052 "dhchap_dhgroups": [ 00:16:17.052 "null", 00:16:17.052 "ffdhe2048", 00:16:17.052 "ffdhe3072", 00:16:17.052 "ffdhe4096", 00:16:17.052 "ffdhe6144", 00:16:17.052 "ffdhe8192" 00:16:17.052 ] 00:16:17.052 } 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "method": "bdev_nvme_set_hotplug", 00:16:17.052 "params": { 00:16:17.052 "period_us": 100000, 00:16:17.052 "enable": false 00:16:17.052 } 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "method": "bdev_malloc_create", 00:16:17.052 "params": { 00:16:17.052 "name": "malloc0", 00:16:17.052 "num_blocks": 8192, 00:16:17.052 "block_size": 4096, 00:16:17.052 "physical_block_size": 4096, 00:16:17.052 "uuid": "1c77c27a-440a-42d0-be9f-759de4723168", 00:16:17.052 "optimal_io_boundary": 0, 00:16:17.052 "md_size": 0, 00:16:17.052 "dif_type": 0, 00:16:17.052 "dif_is_head_of_md": false, 00:16:17.052 "dif_pi_format": 0 00:16:17.052 } 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "method": "bdev_wait_for_examine" 00:16:17.052 } 00:16:17.052 ] 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "scsi", 00:16:17.052 "config": null 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "scheduler", 00:16:17.052 "config": [ 00:16:17.052 { 00:16:17.052 "method": "framework_set_scheduler", 00:16:17.052 "params": { 00:16:17.052 "name": "static" 00:16:17.052 } 00:16:17.052 } 00:16:17.052 ] 00:16:17.052 }, 00:16:17.052 { 00:16:17.052 "subsystem": "vhost_scsi", 00:16:17.053 "config": [] 00:16:17.053 }, 00:16:17.053 { 00:16:17.053 "subsystem": "vhost_blk", 00:16:17.053 "config": [] 00:16:17.053 }, 00:16:17.053 { 00:16:17.053 "subsystem": "ublk", 00:16:17.053 "config": [ 00:16:17.053 { 00:16:17.053 "method": "ublk_create_target", 00:16:17.053 "params": { 00:16:17.053 "cpumask": "1" 00:16:17.053 } 00:16:17.053 }, 00:16:17.053 { 00:16:17.053 "method": "ublk_start_disk", 00:16:17.053 "params": { 00:16:17.053 "bdev_name": "malloc0", 00:16:17.053 "ublk_id": 0, 00:16:17.053 "num_queues": 1, 00:16:17.053 "queue_depth": 128 00:16:17.053 } 00:16:17.053 } 00:16:17.053 ] 00:16:17.053 }, 00:16:17.053 { 00:16:17.053 "subsystem": "nbd", 00:16:17.053 "config": [] 00:16:17.053 }, 00:16:17.053 { 00:16:17.053 "subsystem": "nvmf", 00:16:17.053 "config": [ 00:16:17.053 { 00:16:17.053 "method": "nvmf_set_config", 00:16:17.053 "params": { 00:16:17.053 "discovery_filter": "match_any", 00:16:17.053 "admin_cmd_passthru": { 00:16:17.053 "identify_ctrlr": false 
00:16:17.053 }, 00:16:17.053 "dhchap_digests": [ 00:16:17.053 "sha256", 00:16:17.053 "sha384", 00:16:17.053 "sha512" 00:16:17.053 ], 00:16:17.053 "dhchap_dhgroups": [ 00:16:17.053 "null", 00:16:17.053 "ffdhe2048", 00:16:17.053 "ffdhe3072", 00:16:17.053 "ffdhe4096", 00:16:17.053 "ffdhe6144", 00:16:17.053 "ffdhe8192" 00:16:17.053 ] 00:16:17.053 } 00:16:17.053 }, 00:16:17.053 { 00:16:17.053 "method": "nvmf_set_max_subsystems", 00:16:17.053 "params": { 00:16:17.053 "max_subsystems": 1024 00:16:17.053 } 00:16:17.053 }, 00:16:17.053 { 00:16:17.053 "method": "nvmf_set_crdt", 00:16:17.053 "params": { 00:16:17.053 "crdt1": 0, 00:16:17.053 "crdt2": 0, 00:16:17.053 "crdt3": 0 00:16:17.053 } 00:16:17.053 } 00:16:17.053 ] 00:16:17.053 }, 00:16:17.053 { 00:16:17.053 "subsystem": "iscsi", 00:16:17.053 "config": [ 00:16:17.053 { 00:16:17.053 "method": "iscsi_set_options", 00:16:17.053 "params": { 00:16:17.053 "node_base": "iqn.2016-06.io.spdk", 00:16:17.053 "max_sessions": 128, 00:16:17.053 "max_connections_per_session": 2, 00:16:17.053 "max_queue_depth": 64, 00:16:17.053 "default_time2wait": 2, 00:16:17.053 "default_time2retain": 20, 00:16:17.053 "first_burst_length": 8192, 00:16:17.053 "immediate_data": true, 00:16:17.053 "allow_duplicated_isid": false, 00:16:17.053 "error_recovery_level": 0, 00:16:17.053 "nop_timeout": 60, 00:16:17.053 "nop_in_interval": 30, 00:16:17.053 "disable_chap": false, 00:16:17.053 "require_chap": false, 00:16:17.053 "mutual_chap": false, 00:16:17.053 "chap_group": 0, 00:16:17.053 "max_large_datain_per_connection": 64, 00:16:17.053 "max_r2t_per_connection": 4, 00:16:17.053 "pdu_pool_size": 36864, 00:16:17.053 "immediate_data_pool_size": 16384, 00:16:17.053 "data_out_pool_size": 2048 00:16:17.053 } 00:16:17.053 } 00:16:17.053 ] 00:16:17.053 } 00:16:17.053 ] 00:16:17.053 }' 00:16:17.053 23:50:05 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 84709 00:16:17.053 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84709 ']' 00:16:17.053 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84709 00:16:17.053 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:17.053 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.053 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84709 00:16:17.314 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.314 killing process with pid 84709 00:16:17.314 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.314 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84709' 00:16:17.314 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84709 00:16:17.314 23:50:05 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84709 00:16:17.314 [2024-11-26 23:50:05.440278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:17.592 [2024-11-26 23:50:05.469839] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:17.592 [2024-11-26 23:50:05.469969] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:17.592 [2024-11-26 23:50:05.478828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:17.592 [2024-11-26 
23:50:05.478892] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:17.592 [2024-11-26 23:50:05.478899] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:17.592 [2024-11-26 23:50:05.478928] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:17.592 [2024-11-26 23:50:05.479071] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=84747 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 84747 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84747 ']' 00:16:18.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:18.165 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:18.165 "subsystems": [ 00:16:18.165 { 00:16:18.165 "subsystem": "fsdev", 00:16:18.165 "config": [ 00:16:18.165 { 00:16:18.165 "method": "fsdev_set_opts", 00:16:18.165 "params": { 00:16:18.165 "fsdev_io_pool_size": 65535, 00:16:18.165 "fsdev_io_cache_size": 256 00:16:18.165 } 00:16:18.165 } 00:16:18.165 ] 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "subsystem": "keyring", 00:16:18.165 "config": [] 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "subsystem": "iobuf", 00:16:18.165 "config": [ 00:16:18.165 { 00:16:18.165 "method": "iobuf_set_options", 00:16:18.165 "params": { 00:16:18.165 "small_pool_count": 8192, 00:16:18.165 "large_pool_count": 1024, 00:16:18.165 "small_bufsize": 8192, 00:16:18.165 "large_bufsize": 135168, 00:16:18.165 "enable_numa": false 00:16:18.165 } 00:16:18.165 } 00:16:18.165 ] 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "subsystem": "sock", 00:16:18.165 "config": [ 00:16:18.165 { 00:16:18.165 "method": "sock_set_default_impl", 00:16:18.165 "params": { 00:16:18.165 "impl_name": "posix" 00:16:18.165 } 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "method": "sock_impl_set_options", 00:16:18.165 "params": { 00:16:18.165 "impl_name": "ssl", 00:16:18.165 "recv_buf_size": 4096, 00:16:18.165 "send_buf_size": 4096, 00:16:18.165 "enable_recv_pipe": true, 00:16:18.165 "enable_quickack": false, 00:16:18.165 "enable_placement_id": 0, 00:16:18.165 "enable_zerocopy_send_server": true, 00:16:18.165 "enable_zerocopy_send_client": false, 00:16:18.165 "zerocopy_threshold": 0, 00:16:18.165 "tls_version": 0, 00:16:18.165 "enable_ktls": false 00:16:18.165 } 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "method": "sock_impl_set_options", 00:16:18.165 "params": { 00:16:18.165 "impl_name": "posix", 00:16:18.165 "recv_buf_size": 2097152, 00:16:18.165 "send_buf_size": 2097152, 00:16:18.165 "enable_recv_pipe": true, 00:16:18.165 "enable_quickack": false, 00:16:18.165 "enable_placement_id": 0, 00:16:18.165 "enable_zerocopy_send_server": true, 
00:16:18.165 "enable_zerocopy_send_client": false, 00:16:18.165 "zerocopy_threshold": 0, 00:16:18.165 "tls_version": 0, 00:16:18.165 "enable_ktls": false 00:16:18.165 } 00:16:18.165 } 00:16:18.165 ] 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "subsystem": "vmd", 00:16:18.165 "config": [] 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "subsystem": "accel", 00:16:18.165 "config": [ 00:16:18.165 { 00:16:18.165 "method": "accel_set_options", 00:16:18.165 "params": { 00:16:18.165 "small_cache_size": 128, 00:16:18.165 "large_cache_size": 16, 00:16:18.165 "task_count": 2048, 00:16:18.165 "sequence_count": 2048, 00:16:18.165 "buf_count": 2048 00:16:18.165 } 00:16:18.165 } 00:16:18.165 ] 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "subsystem": "bdev", 00:16:18.165 "config": [ 00:16:18.165 { 00:16:18.165 "method": "bdev_set_options", 00:16:18.165 "params": { 00:16:18.165 "bdev_io_pool_size": 65535, 00:16:18.165 "bdev_io_cache_size": 256, 00:16:18.165 "bdev_auto_examine": true, 00:16:18.165 "iobuf_small_cache_size": 128, 00:16:18.165 "iobuf_large_cache_size": 16 00:16:18.165 } 00:16:18.165 }, 00:16:18.165 { 00:16:18.165 "method": "bdev_raid_set_options", 00:16:18.165 "params": { 00:16:18.165 "process_window_size_kb": 1024, 00:16:18.165 "process_max_bandwidth_mb_sec": 0 00:16:18.165 } 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "method": "bdev_iscsi_set_options", 00:16:18.166 "params": { 00:16:18.166 "timeout_sec": 30 00:16:18.166 } 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "method": "bdev_nvme_set_options", 00:16:18.166 "params": { 00:16:18.166 "action_on_timeout": "none", 00:16:18.166 "timeout_us": 0, 00:16:18.166 "timeout_admin_us": 0, 00:16:18.166 "keep_alive_timeout_ms": 10000, 00:16:18.166 "arbitration_burst": 0, 00:16:18.166 "low_priority_weight": 0, 00:16:18.166 "medium_priority_weight": 0, 00:16:18.166 "high_priority_weight": 0, 00:16:18.166 "nvme_adminq_poll_period_us": 10000, 00:16:18.166 "nvme_ioq_poll_period_us": 0, 00:16:18.166 "io_queue_requests": 0, 00:16:18.166 "delay_cmd_submit": true, 00:16:18.166 "transport_retry_count": 4, 00:16:18.166 "bdev_retry_count": 3, 00:16:18.166 "transport_ack_timeout": 0, 00:16:18.166 "ctrlr_loss_timeout_sec": 0, 00:16:18.166 "reconnect_delay_sec": 0, 00:16:18.166 "fast_io_fail_timeout_sec": 0, 00:16:18.166 "disable_auto_failback": false, 00:16:18.166 "generate_uuids": false, 00:16:18.166 "transport_tos": 0, 00:16:18.166 "nvme_error_stat": false, 00:16:18.166 "rdma_srq_size": 0, 00:16:18.166 "io_path_stat": false, 00:16:18.166 "allow_accel_sequence": false, 00:16:18.166 "rdma_max_cq_size": 0, 00:16:18.166 "rdma_cm_event_timeout_ms": 0, 00:16:18.166 "dhchap_digests": [ 00:16:18.166 "sha256", 00:16:18.166 "sha384", 00:16:18.166 "sha512" 00:16:18.166 ], 00:16:18.166 "dhchap_dhgroups": [ 00:16:18.166 "null", 00:16:18.166 "ffdhe2048", 00:16:18.166 "ffdhe3072", 00:16:18.166 "ffdhe4096", 00:16:18.166 "ffdhe6144", 00:16:18.166 "ffdhe8192" 00:16:18.166 ] 00:16:18.166 } 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "method": "bdev_nvme_set_hotplug", 00:16:18.166 "params": { 00:16:18.166 "period_us": 100000, 00:16:18.166 "enable": false 00:16:18.166 } 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "method": "bdev_malloc_create", 00:16:18.166 "params": { 00:16:18.166 "name": "malloc0", 00:16:18.166 "num_blocks": 8192, 00:16:18.166 "block_size": 4096, 00:16:18.166 "physical_block_size": 4096, 00:16:18.166 "uuid": "1c77c27a-440a-42d0-be9f-759de4723168", 00:16:18.166 "optimal_io_boundary": 0, 00:16:18.166 "md_size": 0, 00:16:18.166 "dif_type": 0, 00:16:18.166 
"dif_is_head_of_md": false, 00:16:18.166 "dif_pi_format": 0 00:16:18.166 } 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "method": "bdev_wait_for_examine" 00:16:18.166 } 00:16:18.166 ] 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "subsystem": "scsi", 00:16:18.166 "config": null 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "subsystem": "scheduler", 00:16:18.166 "config": [ 00:16:18.166 { 00:16:18.166 "method": "framework_set_scheduler", 00:16:18.166 "params": { 00:16:18.166 "name": "static" 00:16:18.166 } 00:16:18.166 } 00:16:18.166 ] 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "subsystem": "vhost_scsi", 00:16:18.166 "config": [] 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "subsystem": "vhost_blk", 00:16:18.166 "config": [] 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "subsystem": "ublk", 00:16:18.166 "config": [ 00:16:18.166 { 00:16:18.166 "method": "ublk_create_target", 00:16:18.166 "params": { 00:16:18.166 "cpumask": "1" 00:16:18.166 } 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "method": "ublk_start_disk", 00:16:18.166 "params": { 00:16:18.166 "bdev_name": "malloc0", 00:16:18.166 "ublk_id": 0, 00:16:18.166 "num_queues": 1, 00:16:18.166 "queue_depth": 128 00:16:18.166 } 00:16:18.166 } 00:16:18.166 ] 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "subsystem": "nbd", 00:16:18.166 "config": [] 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "subsystem": "nvmf", 00:16:18.166 "config": [ 00:16:18.166 { 00:16:18.166 "method": "nvmf_set_config", 00:16:18.166 "params": { 00:16:18.166 "discovery_filter": "match_any", 00:16:18.166 "admin_cmd_passthru": { 00:16:18.166 "identify_ctrlr": false 00:16:18.166 }, 00:16:18.166 "dhchap_digests": [ 00:16:18.166 "sha256", 00:16:18.166 "sha384", 00:16:18.166 "sha512" 00:16:18.166 ], 00:16:18.166 "dhchap_dhgroups": [ 00:16:18.166 "null", 00:16:18.166 "ffdhe2048", 00:16:18.166 "ffdhe3072", 00:16:18.166 "ffdhe4096", 00:16:18.166 "ffdhe6144", 00:16:18.166 "ffdhe8192" 00:16:18.166 ] 00:16:18.166 } 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "method": "nvmf_set_max_subsystems", 00:16:18.166 "params": { 00:16:18.166 "max_subsystems": 1024 00:16:18.166 } 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "method": "nvmf_set_crdt", 00:16:18.166 "params": { 00:16:18.166 "crdt1": 0, 00:16:18.166 "crdt2": 0, 00:16:18.166 "crdt3": 0 00:16:18.166 } 00:16:18.166 } 00:16:18.166 ] 00:16:18.166 }, 00:16:18.166 { 00:16:18.166 "subsystem": "iscsi", 00:16:18.166 "config": [ 00:16:18.166 { 00:16:18.166 "method": "iscsi_set_options", 00:16:18.166 "params": { 00:16:18.166 "node_base": "iqn.2016-06.io.spdk", 00:16:18.166 "max_sessions": 128, 00:16:18.166 "max_connections_per_session": 2, 00:16:18.166 "max_queue_depth": 64, 00:16:18.166 "default_time2wait": 2, 00:16:18.166 "default_time2retain": 20, 00:16:18.166 "first_burst_length": 8192, 00:16:18.166 "immediate_data": true, 00:16:18.166 "allow_duplicated_isid": false, 00:16:18.166 "error_recovery_level": 0, 00:16:18.166 "nop_timeout": 60, 00:16:18.166 "nop_in_interval": 30, 00:16:18.166 "disable_chap": false, 00:16:18.166 "require_chap": false, 00:16:18.166 "mutual_chap": false, 00:16:18.166 "chap_group": 0, 00:16:18.166 "max_large_datain_per_connection": 64, 00:16:18.166 "max_r2t_per_connection": 4, 00:16:18.166 "pdu_pool_size": 36864, 00:16:18.166 "immediate_data_pool_size": 16384, 00:16:18.166 "data_out_pool_size": 2048 00:16:18.166 } 00:16:18.166 } 00:16:18.166 ] 00:16:18.166 } 00:16:18.166 ] 00:16:18.166 }' 00:16:18.166 [2024-11-26 23:50:06.137938] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:16:18.167 [2024-11-26 23:50:06.138541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84747 ] 00:16:18.167 [2024-11-26 23:50:06.285549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.428 [2024-11-26 23:50:06.325824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.687 [2024-11-26 23:50:06.797819] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:18.687 [2024-11-26 23:50:06.798209] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:18.687 [2024-11-26 23:50:06.805957] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:18.687 [2024-11-26 23:50:06.806037] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:18.687 [2024-11-26 23:50:06.806046] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:18.687 [2024-11-26 23:50:06.806058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.687 [2024-11-26 23:50:06.814923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.687 [2024-11-26 23:50:06.814945] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.947 [2024-11-26 23:50:06.821826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.947 [2024-11-26 23:50:06.821936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:18.947 [2024-11-26 23:50:06.838837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 84747 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84747 ']' 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84747 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84747 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.947 killing process with pid 84747 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84747' 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84747 00:16:18.947 23:50:06 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84747 00:16:19.209 [2024-11-26 23:50:07.296807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.209 [2024-11-26 23:50:07.325847] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.209 [2024-11-26 23:50:07.326005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.209 [2024-11-26 23:50:07.333829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.209 [2024-11-26 23:50:07.333910] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:19.209 [2024-11-26 23:50:07.333925] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:19.209 [2024-11-26 23:50:07.333959] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:19.209 [2024-11-26 23:50:07.334121] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:20.153 23:50:07 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:20.153 00:16:20.153 real 0m4.018s 00:16:20.153 user 0m2.494s 00:16:20.153 sys 0m2.073s 00:16:20.153 23:50:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.153 ************************************ 00:16:20.153 END TEST test_save_ublk_config 00:16:20.153 ************************************ 00:16:20.153 23:50:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:20.153 23:50:07 ublk -- ublk/ublk.sh@139 -- # spdk_pid=84798 00:16:20.153 23:50:07 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:20.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.153 23:50:07 ublk -- ublk/ublk.sh@141 -- # waitforlisten 84798 00:16:20.153 23:50:07 ublk -- common/autotest_common.sh@835 -- # '[' -z 84798 ']' 00:16:20.153 23:50:07 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.153 23:50:07 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:20.153 23:50:07 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.153 23:50:07 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.153 23:50:07 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.153 23:50:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.153 [2024-11-26 23:50:08.064972] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
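At this point a second spdk_tgt (spdk_pid 84798 in the trace) is started with a two-core mask (-m 0x3) and the ublk debug log component enabled; it is shared by the test_create_ublk and test_create_multi_ublk cases that follow. The device setup those tests drive through rpc_cmd reduces to a short RPC sequence, sketched below with plain scripts/rpc.py calls and the same arguments that appear in the xtrace:

  ./scripts/rpc.py ublk_create_target                     # create the ublk target threads
  ./scripts/rpc.py bdev_malloc_create 128 4096            # 128 MiB RAM bdev; the returned name is Malloc0 here
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # expose Malloc0 as /dev/ublkb0 with 4 queues, depth 512
  ./scripts/rpc.py ublk_get_disks                         # lists ublk_device /dev/ublkb0, id 0, bdev Malloc0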
00:16:20.153 [2024-11-26 23:50:08.065100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84798 ] 00:16:20.153 [2024-11-26 23:50:08.212820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:20.153 [2024-11-26 23:50:08.254849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.153 [2024-11-26 23:50:08.254851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.108 23:50:08 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.108 23:50:08 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:21.108 23:50:08 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:21.108 23:50:08 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:21.108 23:50:08 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.108 23:50:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:21.108 ************************************ 00:16:21.108 START TEST test_create_ublk 00:16:21.108 ************************************ 00:16:21.108 23:50:08 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:21.108 23:50:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:21.108 23:50:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.108 23:50:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:21.108 [2024-11-26 23:50:08.943820] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:21.108 [2024-11-26 23:50:08.945989] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:21.108 23:50:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.108 23:50:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:21.108 23:50:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:21.108 23:50:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.108 23:50:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:21.108 23:50:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.108 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:21.108 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:21.108 23:50:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.108 23:50:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:21.108 [2024-11-26 23:50:09.061014] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:21.108 [2024-11-26 23:50:09.061528] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:21.108 [2024-11-26 23:50:09.061556] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:21.108 [2024-11-26 23:50:09.061568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:21.108 [2024-11-26 23:50:09.068855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:21.108 [2024-11-26 23:50:09.068893] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:21.108 
[2024-11-26 23:50:09.076824] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:21.108 [2024-11-26 23:50:09.077615] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:21.108 [2024-11-26 23:50:09.100818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:21.108 23:50:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.108 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:21.108 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:21.108 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:21.108 23:50:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.108 23:50:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:21.108 23:50:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.108 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:21.108 { 00:16:21.108 "ublk_device": "/dev/ublkb0", 00:16:21.108 "id": 0, 00:16:21.108 "queue_depth": 512, 00:16:21.108 "num_queues": 4, 00:16:21.108 "bdev_name": "Malloc0" 00:16:21.108 } 00:16:21.108 ]' 00:16:21.108 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:21.109 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:21.109 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:21.109 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:21.109 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:21.109 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:21.109 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:21.369 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:21.369 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:21.369 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:21.369 23:50:09 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
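The fio_template assembled above is a ten-second, time-based, pattern-stamping write job against the new ublk device; the run itself follows immediately. Restated as a standalone command for readability (a sketch, with every value copied from the trace; note fio warns below that the verify read phase never starts because the write phase uses the whole runtime):

  fio --name=fio_test --filename=/dev/ublkb0 \
      --offset=0 --size=134217728 --rw=write --direct=1 \
      --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0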
00:16:21.369 23:50:09 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:21.369 fio: verification read phase will never start because write phase uses all of runtime 00:16:21.369 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:21.369 fio-3.35 00:16:21.369 Starting 1 process 00:16:33.567 00:16:33.567 fio_test: (groupid=0, jobs=1): err= 0: pid=84842: Tue Nov 26 23:50:19 2024 00:16:33.567 write: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(551MiB/10001msec); 0 zone resets 00:16:33.567 clat (usec): min=43, max=3924, avg=70.19, stdev=93.81 00:16:33.567 lat (usec): min=44, max=3924, avg=70.60, stdev=93.83 00:16:33.567 clat percentiles (usec): 00:16:33.567 | 1.00th=[ 52], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 60], 00:16:33.567 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 68], 00:16:33.567 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 76], 95.00th=[ 81], 00:16:33.567 | 99.00th=[ 125], 99.50th=[ 147], 99.90th=[ 1893], 99.95th=[ 2704], 00:16:33.567 | 99.99th=[ 3425] 00:16:33.567 bw ( KiB/s): min=44271, max=59992, per=99.98%, avg=56393.63, stdev=4129.33, samples=19 00:16:33.567 iops : min=11067, max=14998, avg=14098.37, stdev=1032.45, samples=19 00:16:33.567 lat (usec) : 50=0.13%, 100=97.65%, 250=1.96%, 500=0.09%, 750=0.01% 00:16:33.567 lat (usec) : 1000=0.01% 00:16:33.567 lat (msec) : 2=0.06%, 4=0.10% 00:16:33.567 cpu : usr=2.05%, sys=12.53%, ctx=141020, majf=0, minf=796 00:16:33.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.567 issued rwts: total=0,141021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.567 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.567 00:16:33.567 Run status group 0 (all jobs): 00:16:33.567 WRITE: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=551MiB (578MB), run=10001-10001msec 00:16:33.567 00:16:33.567 Disk stats (read/write): 00:16:33.567 ublkb0: ios=0/139660, merge=0/0, ticks=0/8285, in_queue=8286, util=99.10% 00:16:33.567 23:50:19 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:33.567 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.567 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.567 [2024-11-26 23:50:19.532216] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:33.567 [2024-11-26 23:50:19.569381] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:33.567 [2024-11-26 23:50:19.570252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:33.567 [2024-11-26 23:50:19.577852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:33.567 [2024-11-26 23:50:19.578104] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:33.567 [2024-11-26 23:50:19.578124] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:33.567 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.567 23:50:19 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
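The block above completes the happy path: the fio verify-write finishes at roughly 55 MiB/s and ublk_stop_disk 0 tears the device down (UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV, then "ublk dev 0 stopped"). The NOT wrapper at ublk.sh@53 repeats the same RPC to prove it now fails cleanly. A sketch of that teardown plus negative check, using plain scripts/rpc.py calls as assumed equivalents of the rpc_cmd helper in the trace:

  ./scripts/rpc.py ublk_stop_disk 0      # succeeds: device removed, /dev/ublkb0 disappears
  ./scripts/rpc.py ublk_stop_disk 0      # now rejected with code -19 "No such device" (JSON-RPC error shown below)
  ./scripts/rpc.py ublk_destroy_target   # shut the ublk target down once all disks are gone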
00:16:33.567 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 [2024-11-26 23:50:19.592901] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:33.568 request: 00:16:33.568 { 00:16:33.568 "ublk_id": 0, 00:16:33.568 "method": "ublk_stop_disk", 00:16:33.568 "req_id": 1 00:16:33.568 } 00:16:33.568 Got JSON-RPC error response 00:16:33.568 response: 00:16:33.568 { 00:16:33.568 "code": -19, 00:16:33.568 "message": "No such device" 00:16:33.568 } 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.568 23:50:19 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 [2024-11-26 23:50:19.608878] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:33.568 [2024-11-26 23:50:19.610769] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:33.568 [2024-11-26 23:50:19.610815] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:19 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:33.568 23:50:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:33.568 23:50:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:33.568 23:50:19 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:33.568 23:50:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:33.568 23:50:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:33.568 23:50:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:33.568 00:16:33.568 real 0m10.851s 00:16:33.568 user 0m0.506s 00:16:33.568 sys 0m1.341s 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 ************************************ 00:16:33.568 END TEST test_create_ublk 00:16:33.568 ************************************ 00:16:33.568 23:50:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:33.568 23:50:19 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:33.568 23:50:19 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.568 23:50:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 ************************************ 00:16:33.568 START TEST test_create_multi_ublk 00:16:33.568 ************************************ 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 [2024-11-26 23:50:19.835809] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:33.568 [2024-11-26 23:50:19.836917] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 [2024-11-26 23:50:19.919946] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:16:33.568 [2024-11-26 23:50:19.920256] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:33.568 [2024-11-26 23:50:19.920270] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:33.568 [2024-11-26 23:50:19.920276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:33.568 [2024-11-26 23:50:19.939821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:33.568 [2024-11-26 23:50:19.939842] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:33.568 [2024-11-26 23:50:19.951814] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:33.568 [2024-11-26 23:50:19.952325] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:33.568 [2024-11-26 23:50:19.985817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.568 23:50:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 [2024-11-26 23:50:20.089913] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:33.568 [2024-11-26 23:50:20.090227] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:33.568 [2024-11-26 23:50:20.090238] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:33.568 [2024-11-26 23:50:20.090246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:33.568 [2024-11-26 23:50:20.101820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:33.568 [2024-11-26 23:50:20.101839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:33.568 [2024-11-26 23:50:20.113816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:33.568 [2024-11-26 23:50:20.114325] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:33.568 [2024-11-26 23:50:20.142833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.568 23:50:20 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.568 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.568 [2024-11-26 23:50:20.249929] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:33.568 [2024-11-26 23:50:20.250246] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:33.568 [2024-11-26 23:50:20.250260] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:33.568 [2024-11-26 23:50:20.250265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:33.568 [2024-11-26 23:50:20.261822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:33.568 [2024-11-26 23:50:20.261840] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:33.568 [2024-11-26 23:50:20.273812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:33.568 [2024-11-26 23:50:20.274331] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:33.569 [2024-11-26 23:50:20.278536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.569 [2024-11-26 23:50:20.385905] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:33.569 [2024-11-26 23:50:20.386218] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:33.569 [2024-11-26 23:50:20.386230] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:33.569 [2024-11-26 23:50:20.386237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:33.569 [2024-11-26 
23:50:20.397832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:33.569 [2024-11-26 23:50:20.397853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:33.569 [2024-11-26 23:50:20.409812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:33.569 [2024-11-26 23:50:20.410327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:33.569 [2024-11-26 23:50:20.438820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:33.569 { 00:16:33.569 "ublk_device": "/dev/ublkb0", 00:16:33.569 "id": 0, 00:16:33.569 "queue_depth": 512, 00:16:33.569 "num_queues": 4, 00:16:33.569 "bdev_name": "Malloc0" 00:16:33.569 }, 00:16:33.569 { 00:16:33.569 "ublk_device": "/dev/ublkb1", 00:16:33.569 "id": 1, 00:16:33.569 "queue_depth": 512, 00:16:33.569 "num_queues": 4, 00:16:33.569 "bdev_name": "Malloc1" 00:16:33.569 }, 00:16:33.569 { 00:16:33.569 "ublk_device": "/dev/ublkb2", 00:16:33.569 "id": 2, 00:16:33.569 "queue_depth": 512, 00:16:33.569 "num_queues": 4, 00:16:33.569 "bdev_name": "Malloc2" 00:16:33.569 }, 00:16:33.569 { 00:16:33.569 "ublk_device": "/dev/ublkb3", 00:16:33.569 "id": 3, 00:16:33.569 "queue_depth": 512, 00:16:33.569 "num_queues": 4, 00:16:33.569 "bdev_name": "Malloc3" 00:16:33.569 } 00:16:33.569 ]' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:33.569 23:50:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.569 [2024-11-26 23:50:21.137877] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:33.569 [2024-11-26 23:50:21.176382] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:33.569 [2024-11-26 23:50:21.177461] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:33.569 [2024-11-26 23:50:21.185811] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:33.569 [2024-11-26 23:50:21.186056] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:33.569 [2024-11-26 23:50:21.186067] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.569 [2024-11-26 23:50:21.200890] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:33.569 [2024-11-26 23:50:21.238365] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:33.569 [2024-11-26 23:50:21.239443] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:33.569 [2024-11-26 23:50:21.244818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:33.569 [2024-11-26 23:50:21.245051] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:33.569 [2024-11-26 23:50:21.245060] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.569 [2024-11-26 23:50:21.260874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:33.569 [2024-11-26 23:50:21.291375] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:33.569 [2024-11-26 23:50:21.292357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:33.569 [2024-11-26 23:50:21.300816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:33.569 [2024-11-26 23:50:21.301051] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:33.569 [2024-11-26 23:50:21.301061] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.569 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:16:33.570 [2024-11-26 23:50:21.316874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:33.570 [2024-11-26 23:50:21.350837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:33.570 [2024-11-26 23:50:21.351464] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:33.570 [2024-11-26 23:50:21.359822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:33.570 [2024-11-26 23:50:21.360063] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:33.570 [2024-11-26 23:50:21.360074] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:33.570 [2024-11-26 23:50:21.543867] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:33.570 [2024-11-26 23:50:21.545498] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:33.570 [2024-11-26 23:50:21.545526] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.570 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:33.830 23:50:21 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:34.107 23:50:21 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:34.107 00:16:34.107 real 0m2.142s 00:16:34.107 user 0m0.819s 00:16:34.107 sys 0m0.151s 00:16:34.107 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.107 23:50:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.107 ************************************ 00:16:34.107 END TEST test_create_multi_ublk 00:16:34.107 ************************************ 00:16:34.107 23:50:21 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:34.107 23:50:21 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:34.107 23:50:21 ublk -- ublk/ublk.sh@130 -- # killprocess 84798 00:16:34.107 23:50:21 ublk -- common/autotest_common.sh@954 -- # '[' -z 84798 ']' 00:16:34.107 23:50:21 ublk -- common/autotest_common.sh@958 -- # kill -0 84798 00:16:34.107 23:50:21 ublk -- common/autotest_common.sh@959 -- # uname 00:16:34.107 23:50:22 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.107 23:50:22 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84798 00:16:34.107 23:50:22 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.107 23:50:22 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.107 killing process with pid 84798 00:16:34.107 23:50:22 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84798' 00:16:34.107 23:50:22 ublk -- common/autotest_common.sh@973 -- # kill 84798 00:16:34.107 23:50:22 ublk -- common/autotest_common.sh@978 -- # wait 84798 00:16:34.398 [2024-11-26 23:50:22.245020] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:34.398 [2024-11-26 23:50:22.245100] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:34.398 ************************************ 00:16:34.398 END TEST ublk 00:16:34.398 ************************************ 00:16:34.398 00:16:34.398 real 0m18.757s 00:16:34.398 user 0m27.914s 00:16:34.398 sys 0m8.575s 00:16:34.398 23:50:22 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.398 23:50:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.659 23:50:22 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:34.659 23:50:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:16:34.659 23:50:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.659 23:50:22 -- common/autotest_common.sh@10 -- # set +x 00:16:34.659 ************************************ 00:16:34.659 START TEST ublk_recovery 00:16:34.659 ************************************ 00:16:34.659 23:50:22 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:34.659 * Looking for test storage... 00:16:34.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:34.659 23:50:22 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.659 23:50:22 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.659 23:50:22 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.659 23:50:22 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.659 23:50:22 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:34.659 23:50:22 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.660 --rc genhtml_branch_coverage=1 00:16:34.660 --rc genhtml_function_coverage=1 00:16:34.660 --rc genhtml_legend=1 00:16:34.660 --rc geninfo_all_blocks=1 00:16:34.660 --rc geninfo_unexecuted_blocks=1 00:16:34.660 00:16:34.660 ' 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.660 --rc genhtml_branch_coverage=1 00:16:34.660 --rc genhtml_function_coverage=1 00:16:34.660 --rc genhtml_legend=1 00:16:34.660 --rc geninfo_all_blocks=1 00:16:34.660 --rc geninfo_unexecuted_blocks=1 00:16:34.660 00:16:34.660 ' 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.660 --rc genhtml_branch_coverage=1 00:16:34.660 --rc genhtml_function_coverage=1 00:16:34.660 --rc genhtml_legend=1 00:16:34.660 --rc geninfo_all_blocks=1 00:16:34.660 --rc geninfo_unexecuted_blocks=1 00:16:34.660 00:16:34.660 ' 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.660 --rc genhtml_branch_coverage=1 00:16:34.660 --rc genhtml_function_coverage=1 00:16:34.660 --rc genhtml_legend=1 00:16:34.660 --rc geninfo_all_blocks=1 00:16:34.660 --rc geninfo_unexecuted_blocks=1 00:16:34.660 00:16:34.660 ' 00:16:34.660 23:50:22 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:34.660 23:50:22 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:34.660 23:50:22 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:34.660 23:50:22 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:34.660 23:50:22 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:34.660 23:50:22 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:34.660 23:50:22 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:34.660 23:50:22 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:34.660 23:50:22 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:34.660 23:50:22 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:34.660 23:50:22 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85164 00:16:34.660 23:50:22 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:34.660 23:50:22 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85164 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85164 ']' 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.660 23:50:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:34.660 23:50:22 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:34.660 [2024-11-26 23:50:22.782629] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:16:34.660 [2024-11-26 23:50:22.782766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85164 ] 00:16:34.921 [2024-11-26 23:50:22.926192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:34.921 [2024-11-26 23:50:22.968093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.921 [2024-11-26 23:50:22.968149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:35.492 23:50:23 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:35.492 [2024-11-26 23:50:23.577813] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:35.492 [2024-11-26 23:50:23.579128] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.492 23:50:23 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:35.492 malloc0 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.492 23:50:23 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.492 23:50:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:35.492 [2024-11-26 23:50:23.617939] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:35.492 [2024-11-26 23:50:23.618037] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:35.492 [2024-11-26 23:50:23.618046] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:35.492 [2024-11-26 23:50:23.618063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:35.753 [2024-11-26 23:50:23.626919] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:35.753 [2024-11-26 23:50:23.626947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:35.753 [2024-11-26 23:50:23.633822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:35.753 [2024-11-26 23:50:23.633973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:35.753 [2024-11-26 23:50:23.655823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:35.753 1 00:16:35.753 23:50:23 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.753 23:50:23 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:36.697 23:50:24 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85197 00:16:36.697 23:50:24 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:36.697 23:50:24 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:36.697 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:36.697 fio-3.35 00:16:36.697 Starting 1 process 00:16:41.973 23:50:29 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85164 00:16:41.973 23:50:29 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:47.261 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85164 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:47.261 23:50:34 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85308 00:16:47.261 23:50:34 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.261 23:50:34 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85308 00:16:47.261 23:50:34 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85308 ']' 00:16:47.261 23:50:34 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.261 23:50:34 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.261 23:50:34 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.261 23:50:34 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.261 23:50:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.261 23:50:34 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:47.261 [2024-11-26 23:50:34.755548] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
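At this point the harness has deliberately crashed the first target: while the fio job was still driving /dev/ublkb1, it sent SIGKILL to pid 85164, slept a few seconds, and relaunched spdk_tgt with the same core mask so the still-present kernel ublk device can be recovered. A minimal sketch of that step, with assumed helper variables (the real logic lives in ublk_recovery.sh):

    # Sketch of the crash-and-restart step traced above (assumed variable names)
    kill -9 "$spdk_pid"                            # hard-kill the target while fio keeps queueing I/O
    sleep 5                                        # let I/O stall against the dead target
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # relaunch with the same core mask and ublk tracing
    spdk_pid=$!                                    # new pid; wait for /var/tmp/spdk.sock before issuing RPCs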
00:16:47.261 [2024-11-26 23:50:34.755674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85308 ] 00:16:47.261 [2024-11-26 23:50:34.900763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:47.261 [2024-11-26 23:50:34.943429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.261 [2024-11-26 23:50:34.943525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.523 23:50:35 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.523 23:50:35 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:47.523 23:50:35 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:47.523 23:50:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.523 23:50:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.523 [2024-11-26 23:50:35.598827] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.523 [2024-11-26 23:50:35.601040] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.523 23:50:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.523 23:50:35 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:47.523 23:50:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.523 23:50:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.784 malloc0 00:16:47.784 23:50:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.784 23:50:35 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:47.784 23:50:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.784 23:50:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.784 [2024-11-26 23:50:35.662991] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:47.784 [2024-11-26 23:50:35.663057] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:47.784 [2024-11-26 23:50:35.663067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:47.784 [2024-11-26 23:50:35.670880] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:47.784 [2024-11-26 23:50:35.670907] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:47.784 [2024-11-26 23:50:35.670924] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:47.784 [2024-11-26 23:50:35.671012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:47.784 1 00:16:47.784 23:50:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.784 23:50:35 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85197 00:16:47.784 [2024-11-26 23:50:35.678836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:47.784 [2024-11-26 23:50:35.686702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:47.784 [2024-11-26 23:50:35.694130] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:47.784 [2024-11-26 
23:50:35.694158] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:44.009 00:17:44.009 fio_test: (groupid=0, jobs=1): err= 0: pid=85200: Tue Nov 26 23:51:24 2024 00:17:44.009 read: IOPS=24.4k, BW=95.2MiB/s (99.8MB/s)(5709MiB/60002msec) 00:17:44.009 slat (nsec): min=1193, max=3236.5k, avg=5439.63, stdev=3805.18 00:17:44.009 clat (usec): min=1339, max=6032.6k, avg=2599.08, stdev=41426.51 00:17:44.009 lat (usec): min=1345, max=6032.6k, avg=2604.52, stdev=41426.50 00:17:44.009 clat percentiles (usec): 00:17:44.009 | 1.00th=[ 1827], 5.00th=[ 2073], 10.00th=[ 2114], 20.00th=[ 2147], 00:17:44.009 | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2180], 60.00th=[ 2212], 00:17:44.009 | 70.00th=[ 2212], 80.00th=[ 2245], 90.00th=[ 2442], 95.00th=[ 3294], 00:17:44.009 | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 7177], 99.95th=[ 8029], 00:17:44.009 | 99.99th=[13042] 00:17:44.009 bw ( KiB/s): min=15752, max=122408, per=100.00%, avg=107335.26, stdev=12857.25, samples=108 00:17:44.010 iops : min= 3938, max=30602, avg=26833.81, stdev=3214.31, samples=108 00:17:44.010 write: IOPS=24.3k, BW=95.1MiB/s (99.7MB/s)(5703MiB/60002msec); 0 zone resets 00:17:44.010 slat (nsec): min=1203, max=550455, avg=5673.66, stdev=1732.54 00:17:44.010 clat (usec): min=1383, max=6032.7k, avg=2646.22, stdev=38329.28 00:17:44.010 lat (usec): min=1388, max=6032.7k, avg=2651.89, stdev=38329.28 00:17:44.010 clat percentiles (usec): 00:17:44.010 | 1.00th=[ 1893], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2245], 00:17:44.010 | 30.00th=[ 2245], 40.00th=[ 2278], 50.00th=[ 2278], 60.00th=[ 2311], 00:17:44.010 | 70.00th=[ 2311], 80.00th=[ 2343], 90.00th=[ 2474], 95.00th=[ 3261], 00:17:44.010 | 99.00th=[ 5014], 99.50th=[ 5800], 99.90th=[ 7242], 99.95th=[ 8094], 00:17:44.010 | 99.99th=[12780] 00:17:44.010 bw ( KiB/s): min=16104, max=123472, per=100.00%, avg=107217.48, stdev=12772.22, samples=108 00:17:44.010 iops : min= 4026, max=30868, avg=26804.37, stdev=3193.06, samples=108 00:17:44.010 lat (msec) : 2=2.38%, 4=94.94%, 10=2.66%, 20=0.01%, >=2000=0.01% 00:17:44.010 cpu : usr=5.43%, sys=27.89%, ctx=95454, majf=0, minf=13 00:17:44.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:44.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:44.010 issued rwts: total=1461599,1460094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:44.010 00:17:44.010 Run status group 0 (all jobs): 00:17:44.010 READ: bw=95.2MiB/s (99.8MB/s), 95.2MiB/s-95.2MiB/s (99.8MB/s-99.8MB/s), io=5709MiB (5987MB), run=60002-60002msec 00:17:44.010 WRITE: bw=95.1MiB/s (99.7MB/s), 95.1MiB/s-95.1MiB/s (99.7MB/s-99.7MB/s), io=5703MiB (5981MB), run=60002-60002msec 00:17:44.010 00:17:44.010 Disk stats (read/write): 00:17:44.010 ublkb1: ios=1458624/1457079, merge=0/0, ticks=3710281/3651849, in_queue=7362131, util=99.89% 00:17:44.010 23:51:24 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:44.010 23:51:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.010 23:51:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.010 [2024-11-26 23:51:24.924068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:44.010 [2024-11-26 23:51:24.967840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:44.010 
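The 60-second summary above comes from the single fio job launched before the crash; ublk_recovery.sh leaves it running in the background across the target restart, so the same /dev/ublkb1 queue is re-attached by the START_USER_RECOVERY / END_USER_RECOVERY control commands seen earlier. A self-contained approximation of that workload, assuming /dev/ublkb1 exists and cores 2-3 are free:

    # Approximation of the recovery workload (same flags as the traced invocation)
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 \
        --numjobs=1 --iodepth=128 --ioengine=libaio \
        --rw=randrw --direct=1 --time_based --runtime=60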
[2024-11-26 23:51:24.968009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:44.010 [2024-11-26 23:51:24.975830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:44.010 [2024-11-26 23:51:24.975936] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:44.010 [2024-11-26 23:51:24.975943] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:44.010 23:51:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.010 23:51:24 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:44.010 23:51:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.010 23:51:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.010 [2024-11-26 23:51:24.991900] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:44.010 [2024-11-26 23:51:24.993585] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:44.010 [2024-11-26 23:51:24.993619] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:44.010 23:51:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.010 23:51:24 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:44.010 23:51:24 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:44.010 23:51:24 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85308 00:17:44.010 23:51:24 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85308 ']' 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85308 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85308 00:17:44.010 killing process with pid 85308 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85308' 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85308 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85308 00:17:44.010 [2024-11-26 23:51:25.191184] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:44.010 [2024-11-26 23:51:25.191241] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:44.010 00:17:44.010 real 1m2.929s 00:17:44.010 user 1m36.328s 00:17:44.010 sys 0m38.838s 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.010 ************************************ 00:17:44.010 END TEST ublk_recovery 00:17:44.010 ************************************ 00:17:44.010 23:51:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.010 23:51:25 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:44.010 23:51:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:44.010 23:51:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.010 23:51:25 -- common/autotest_common.sh@10 -- # set +x 00:17:44.010 23:51:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:44.010 
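Stripped of the xtrace noise, the whole ublk_recovery flow reduces to a short RPC sequence. The sketch below lists only the calls that appear in the trace (issued here through plain rpc.py rather than the test's rpc_cmd helper), with the kill/restart and fio plumbing elided:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC ublk_create_target                          # first target: create the ublk target
    $RPC bdev_malloc_create -b malloc0 64 4096       # 64 MiB malloc bdev, 4 KiB blocks
    $RPC ublk_start_disk malloc0 1 -q 2 -d 128       # expose it as /dev/ublkb1 (2 queues, qd 128)
    # ... fio starts, the target is killed with SIGKILL and relaunched ...
    $RPC ublk_create_target                          # second target: recreate the ublk target
    $RPC bdev_malloc_create -b malloc0 64 4096       # recreate the backing bdev
    $RPC ublk_recover_disk malloc0 1                 # re-attach the surviving /dev/ublkb1
    $RPC ublk_stop_disk 1                            # teardown after fio completes
    $RPC ublk_destroy_target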
23:51:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:44.010 23:51:25 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:44.010 23:51:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:44.010 23:51:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.010 23:51:25 -- common/autotest_common.sh@10 -- # set +x 00:17:44.010 ************************************ 00:17:44.010 START TEST ftl 00:17:44.010 ************************************ 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:44.010 * Looking for test storage... 00:17:44.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.010 23:51:25 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.010 23:51:25 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.010 23:51:25 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.010 23:51:25 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.010 23:51:25 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.010 23:51:25 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.010 23:51:25 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.010 23:51:25 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.010 23:51:25 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.010 23:51:25 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.010 23:51:25 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.010 23:51:25 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:44.010 23:51:25 ftl -- scripts/common.sh@345 -- # : 1 00:17:44.010 23:51:25 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.010 23:51:25 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.010 23:51:25 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:44.010 23:51:25 ftl -- scripts/common.sh@353 -- # local d=1 00:17:44.010 23:51:25 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.010 23:51:25 ftl -- scripts/common.sh@355 -- # echo 1 00:17:44.010 23:51:25 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.010 23:51:25 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:44.010 23:51:25 ftl -- scripts/common.sh@353 -- # local d=2 00:17:44.010 23:51:25 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.010 23:51:25 ftl -- scripts/common.sh@355 -- # echo 2 00:17:44.010 23:51:25 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.010 23:51:25 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.010 23:51:25 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.010 23:51:25 ftl -- scripts/common.sh@368 -- # return 0 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.010 --rc genhtml_branch_coverage=1 00:17:44.010 --rc genhtml_function_coverage=1 00:17:44.010 --rc genhtml_legend=1 00:17:44.010 --rc geninfo_all_blocks=1 00:17:44.010 --rc geninfo_unexecuted_blocks=1 00:17:44.010 00:17:44.010 ' 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.010 --rc genhtml_branch_coverage=1 00:17:44.010 --rc genhtml_function_coverage=1 00:17:44.010 --rc genhtml_legend=1 00:17:44.010 --rc geninfo_all_blocks=1 00:17:44.010 --rc geninfo_unexecuted_blocks=1 00:17:44.010 00:17:44.010 ' 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.010 --rc genhtml_branch_coverage=1 00:17:44.010 --rc genhtml_function_coverage=1 00:17:44.010 --rc genhtml_legend=1 00:17:44.010 --rc geninfo_all_blocks=1 00:17:44.010 --rc geninfo_unexecuted_blocks=1 00:17:44.010 00:17:44.010 ' 00:17:44.010 23:51:25 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.010 --rc genhtml_branch_coverage=1 00:17:44.010 --rc genhtml_function_coverage=1 00:17:44.010 --rc genhtml_legend=1 00:17:44.010 --rc geninfo_all_blocks=1 00:17:44.010 --rc geninfo_unexecuted_blocks=1 00:17:44.010 00:17:44.010 ' 00:17:44.011 23:51:25 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:44.011 23:51:25 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:44.011 23:51:25 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.011 23:51:25 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.011 23:51:25 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
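The scripts/common.sh trace above (the same block that opened the ublk_recovery run) is just a component-wise version comparison: autotest_common.sh checks whether the installed lcov is older than 2.x and, if so, keeps the legacy --rc lcov_* coverage flags. A condensed, standalone re-statement of that logic; the helper name and structure here are illustrative, not the script's, and the exported options are abridged:

    # version_lt A B: succeed if dot/dash/colon-separated version A sorts before B
    version_lt() {
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }
    # lcov 1.15 sorts before 2, so the old-style coverage options are exported
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi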
00:17:44.011 23:51:25 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:44.011 23:51:25 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.011 23:51:25 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:44.011 23:51:25 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:44.011 23:51:25 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.011 23:51:25 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.011 23:51:25 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:44.011 23:51:25 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:44.011 23:51:25 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:44.011 23:51:25 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:44.011 23:51:25 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:44.011 23:51:25 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:44.011 23:51:25 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.011 23:51:25 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.011 23:51:25 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:44.011 23:51:25 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:44.011 23:51:25 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:44.011 23:51:25 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:44.011 23:51:25 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:44.011 23:51:25 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:44.011 23:51:25 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:44.011 23:51:25 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:44.011 23:51:25 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:44.011 23:51:25 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:44.011 23:51:25 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.011 23:51:25 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:44.011 23:51:25 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:44.011 23:51:25 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:44.011 23:51:25 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:44.011 23:51:25 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:44.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:44.011 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:44.011 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:44.011 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:44.011 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:44.011 23:51:26 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=86106 00:17:44.011 23:51:26 ftl -- ftl/ftl.sh@38 -- # waitforlisten 86106 00:17:44.011 23:51:26 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:44.011 23:51:26 ftl -- common/autotest_common.sh@835 -- # '[' -z 86106 ']' 00:17:44.011 23:51:26 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.011 23:51:26 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.011 23:51:26 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.011 23:51:26 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.011 23:51:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:44.011 [2024-11-26 23:51:26.286017] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:17:44.011 [2024-11-26 23:51:26.286126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86106 ] 00:17:44.011 [2024-11-26 23:51:26.428825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.011 [2024-11-26 23:51:26.452331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.011 23:51:27 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.011 23:51:27 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:44.011 23:51:27 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:44.011 23:51:27 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:44.011 23:51:27 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:44.011 23:51:27 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@50 -- # break 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@63 -- # break 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@66 -- # killprocess 86106 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@954 -- # '[' -z 86106 ']' 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@958 -- # kill -0 86106 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@959 -- # uname 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.011 23:51:28 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86106 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.011 killing process with pid 86106 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86106' 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@973 -- # kill 86106 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@978 -- # wait 86106 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:44.011 23:51:28 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.011 23:51:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:44.011 ************************************ 00:17:44.011 START TEST ftl_fio_basic 00:17:44.011 ************************************ 00:17:44.011 23:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:44.011 * Looking for test storage... 00:17:44.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.011 23:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.011 23:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.011 23:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.011 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.011 --rc genhtml_branch_coverage=1 00:17:44.011 --rc genhtml_function_coverage=1 00:17:44.011 --rc genhtml_legend=1 00:17:44.011 --rc geninfo_all_blocks=1 00:17:44.012 --rc geninfo_unexecuted_blocks=1 00:17:44.012 00:17:44.012 ' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.012 --rc genhtml_branch_coverage=1 00:17:44.012 --rc genhtml_function_coverage=1 00:17:44.012 --rc genhtml_legend=1 00:17:44.012 --rc geninfo_all_blocks=1 00:17:44.012 --rc geninfo_unexecuted_blocks=1 00:17:44.012 00:17:44.012 ' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.012 --rc genhtml_branch_coverage=1 00:17:44.012 --rc genhtml_function_coverage=1 00:17:44.012 --rc genhtml_legend=1 00:17:44.012 --rc geninfo_all_blocks=1 00:17:44.012 --rc geninfo_unexecuted_blocks=1 00:17:44.012 00:17:44.012 ' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.012 --rc genhtml_branch_coverage=1 00:17:44.012 --rc genhtml_function_coverage=1 00:17:44.012 --rc genhtml_legend=1 00:17:44.012 --rc geninfo_all_blocks=1 00:17:44.012 --rc geninfo_unexecuted_blocks=1 00:17:44.012 00:17:44.012 ' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:44.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86222 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86222 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86222 ']' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:44.012 [2024-11-26 23:51:29.161321] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
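For reference, the two PCI addresses handed to fio.sh (base 0000:00:11.0, cache 0000:00:10.0) were selected a little earlier in ftl.sh by filtering bdev_get_bdevs output with jq: the NV-cache controller must be a non-zoned bdev exposing 64-byte metadata with at least 1310720 blocks, and the base device is any other sufficiently large non-zoned NVMe bdev. A condensed sketch of that selection; the loop-and-break logic is collapsed to head -n1 and the cache address is parameterized rather than hard-coded as in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # NV-cache candidates: non-zoned bdevs with 64B metadata and enough blocks
    nv_cache=$($RPC bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
             | .driver_specific.nvme[].pci_address' | head -n1)
    # Base candidates: any other non-zoned NVMe bdev large enough for the test
    device=$($RPC bdev_get_bdevs | jq -r --arg c "$nv_cache" \
        '.[] | select(.driver_specific.nvme[0].pci_address != $c and .zoned == false
                      and .num_blocks >= 1310720)
             | .driver_specific.nvme[].pci_address' | head -n1)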
00:17:44.012 [2024-11-26 23:51:29.161483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86222 ] 00:17:44.012 [2024-11-26 23:51:29.303845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:44.012 [2024-11-26 23:51:29.334968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.012 [2024-11-26 23:51:29.335337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.012 [2024-11-26 23:51:29.335382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.012 23:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:44.012 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:44.012 { 00:17:44.012 "name": "nvme0n1", 00:17:44.012 "aliases": [ 00:17:44.012 "0eaa4fb9-bf0a-46b3-a1f5-376aead0414b" 00:17:44.012 ], 00:17:44.012 "product_name": "NVMe disk", 00:17:44.012 "block_size": 4096, 00:17:44.012 "num_blocks": 1310720, 00:17:44.012 "uuid": "0eaa4fb9-bf0a-46b3-a1f5-376aead0414b", 00:17:44.012 "numa_id": -1, 00:17:44.012 "assigned_rate_limits": { 00:17:44.012 "rw_ios_per_sec": 0, 00:17:44.012 "rw_mbytes_per_sec": 0, 00:17:44.012 "r_mbytes_per_sec": 0, 00:17:44.012 "w_mbytes_per_sec": 0 00:17:44.012 }, 00:17:44.012 "claimed": false, 00:17:44.012 "zoned": false, 00:17:44.012 "supported_io_types": { 00:17:44.012 "read": true, 00:17:44.012 "write": true, 00:17:44.012 "unmap": true, 00:17:44.012 "flush": true, 00:17:44.012 "reset": true, 00:17:44.012 "nvme_admin": true, 00:17:44.012 "nvme_io": true, 00:17:44.012 "nvme_io_md": false, 00:17:44.012 "write_zeroes": true, 00:17:44.012 "zcopy": false, 00:17:44.012 "get_zone_info": false, 00:17:44.012 "zone_management": false, 00:17:44.012 "zone_append": false, 00:17:44.012 "compare": true, 00:17:44.012 "compare_and_write": false, 00:17:44.012 "abort": true, 00:17:44.012 
"seek_hole": false, 00:17:44.012 "seek_data": false, 00:17:44.012 "copy": true, 00:17:44.012 "nvme_iov_md": false 00:17:44.012 }, 00:17:44.012 "driver_specific": { 00:17:44.012 "nvme": [ 00:17:44.012 { 00:17:44.012 "pci_address": "0000:00:11.0", 00:17:44.012 "trid": { 00:17:44.012 "trtype": "PCIe", 00:17:44.012 "traddr": "0000:00:11.0" 00:17:44.012 }, 00:17:44.012 "ctrlr_data": { 00:17:44.012 "cntlid": 0, 00:17:44.012 "vendor_id": "0x1b36", 00:17:44.013 "model_number": "QEMU NVMe Ctrl", 00:17:44.013 "serial_number": "12341", 00:17:44.013 "firmware_revision": "8.0.0", 00:17:44.013 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:44.013 "oacs": { 00:17:44.013 "security": 0, 00:17:44.013 "format": 1, 00:17:44.013 "firmware": 0, 00:17:44.013 "ns_manage": 1 00:17:44.013 }, 00:17:44.013 "multi_ctrlr": false, 00:17:44.013 "ana_reporting": false 00:17:44.013 }, 00:17:44.013 "vs": { 00:17:44.013 "nvme_version": "1.4" 00:17:44.013 }, 00:17:44.013 "ns_data": { 00:17:44.013 "id": 1, 00:17:44.013 "can_share": false 00:17:44.013 } 00:17:44.013 } 00:17:44.013 ], 00:17:44.013 "mp_policy": "active_passive" 00:17:44.013 } 00:17:44.013 } 00:17:44.013 ]' 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:44.013 23:51:30 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=775ebe16-78b4-4d6b-aaa6-11b14ca20de9 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 775ebe16-78b4-4d6b-aaa6-11b14ca20de9 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=299311c8-2edd-47b4-8078-84b3a6c67ecc 
00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:44.013 { 00:17:44.013 "name": "299311c8-2edd-47b4-8078-84b3a6c67ecc", 00:17:44.013 "aliases": [ 00:17:44.013 "lvs/nvme0n1p0" 00:17:44.013 ], 00:17:44.013 "product_name": "Logical Volume", 00:17:44.013 "block_size": 4096, 00:17:44.013 "num_blocks": 26476544, 00:17:44.013 "uuid": "299311c8-2edd-47b4-8078-84b3a6c67ecc", 00:17:44.013 "assigned_rate_limits": { 00:17:44.013 "rw_ios_per_sec": 0, 00:17:44.013 "rw_mbytes_per_sec": 0, 00:17:44.013 "r_mbytes_per_sec": 0, 00:17:44.013 "w_mbytes_per_sec": 0 00:17:44.013 }, 00:17:44.013 "claimed": false, 00:17:44.013 "zoned": false, 00:17:44.013 "supported_io_types": { 00:17:44.013 "read": true, 00:17:44.013 "write": true, 00:17:44.013 "unmap": true, 00:17:44.013 "flush": false, 00:17:44.013 "reset": true, 00:17:44.013 "nvme_admin": false, 00:17:44.013 "nvme_io": false, 00:17:44.013 "nvme_io_md": false, 00:17:44.013 "write_zeroes": true, 00:17:44.013 "zcopy": false, 00:17:44.013 "get_zone_info": false, 00:17:44.013 "zone_management": false, 00:17:44.013 "zone_append": false, 00:17:44.013 "compare": false, 00:17:44.013 "compare_and_write": false, 00:17:44.013 "abort": false, 00:17:44.013 "seek_hole": true, 00:17:44.013 "seek_data": true, 00:17:44.013 "copy": false, 00:17:44.013 "nvme_iov_md": false 00:17:44.013 }, 00:17:44.013 "driver_specific": { 00:17:44.013 "lvol": { 00:17:44.013 "lvol_store_uuid": "775ebe16-78b4-4d6b-aaa6-11b14ca20de9", 00:17:44.013 "base_bdev": "nvme0n1", 00:17:44.013 "thin_provision": true, 00:17:44.013 "num_allocated_clusters": 0, 00:17:44.013 "snapshot": false, 00:17:44.013 "clone": false, 00:17:44.013 "esnap_clone": false 00:17:44.013 } 00:17:44.013 } 00:17:44.013 } 00:17:44.013 ]' 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:31 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:44.013 { 00:17:44.013 "name": "299311c8-2edd-47b4-8078-84b3a6c67ecc", 00:17:44.013 "aliases": [ 00:17:44.013 "lvs/nvme0n1p0" 00:17:44.013 ], 00:17:44.013 "product_name": "Logical Volume", 00:17:44.013 "block_size": 4096, 00:17:44.013 "num_blocks": 26476544, 00:17:44.013 "uuid": "299311c8-2edd-47b4-8078-84b3a6c67ecc", 00:17:44.013 "assigned_rate_limits": { 00:17:44.013 "rw_ios_per_sec": 0, 00:17:44.013 "rw_mbytes_per_sec": 0, 00:17:44.013 "r_mbytes_per_sec": 0, 00:17:44.013 "w_mbytes_per_sec": 0 00:17:44.013 }, 00:17:44.013 "claimed": false, 00:17:44.013 "zoned": false, 00:17:44.013 "supported_io_types": { 00:17:44.013 "read": true, 00:17:44.013 "write": true, 00:17:44.013 "unmap": true, 00:17:44.013 "flush": false, 00:17:44.013 "reset": true, 00:17:44.013 "nvme_admin": false, 00:17:44.013 "nvme_io": false, 00:17:44.013 "nvme_io_md": false, 00:17:44.013 "write_zeroes": true, 00:17:44.013 "zcopy": false, 00:17:44.013 "get_zone_info": false, 00:17:44.013 "zone_management": false, 00:17:44.013 "zone_append": false, 00:17:44.013 "compare": false, 00:17:44.013 "compare_and_write": false, 00:17:44.013 "abort": false, 00:17:44.013 "seek_hole": true, 00:17:44.013 "seek_data": true, 00:17:44.013 "copy": false, 00:17:44.013 "nvme_iov_md": false 00:17:44.013 }, 00:17:44.013 "driver_specific": { 00:17:44.013 "lvol": { 00:17:44.013 "lvol_store_uuid": "775ebe16-78b4-4d6b-aaa6-11b14ca20de9", 00:17:44.013 "base_bdev": "nvme0n1", 00:17:44.013 "thin_provision": true, 00:17:44.013 "num_allocated_clusters": 0, 00:17:44.013 "snapshot": false, 00:17:44.013 "clone": false, 00:17:44.013 "esnap_clone": false 00:17:44.013 } 00:17:44.013 } 00:17:44.013 } 00:17:44.013 ]' 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:44.013 23:51:31 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:44.013 23:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:44.013 23:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:44.013 23:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:44.013 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:44.013 23:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.013 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:44.014 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:44.014 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:44.014 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 299311c8-2edd-47b4-8078-84b3a6c67ecc 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:44.275 { 00:17:44.275 "name": "299311c8-2edd-47b4-8078-84b3a6c67ecc", 00:17:44.275 "aliases": [ 00:17:44.275 "lvs/nvme0n1p0" 00:17:44.275 ], 00:17:44.275 "product_name": "Logical Volume", 00:17:44.275 "block_size": 4096, 00:17:44.275 "num_blocks": 26476544, 00:17:44.275 "uuid": "299311c8-2edd-47b4-8078-84b3a6c67ecc", 00:17:44.275 "assigned_rate_limits": { 00:17:44.275 "rw_ios_per_sec": 0, 00:17:44.275 "rw_mbytes_per_sec": 0, 00:17:44.275 "r_mbytes_per_sec": 0, 00:17:44.275 "w_mbytes_per_sec": 0 00:17:44.275 }, 00:17:44.275 "claimed": false, 00:17:44.275 "zoned": false, 00:17:44.275 "supported_io_types": { 00:17:44.275 "read": true, 00:17:44.275 "write": true, 00:17:44.275 "unmap": true, 00:17:44.275 "flush": false, 00:17:44.275 "reset": true, 00:17:44.275 "nvme_admin": false, 00:17:44.275 "nvme_io": false, 00:17:44.275 "nvme_io_md": false, 00:17:44.275 "write_zeroes": true, 00:17:44.275 "zcopy": false, 00:17:44.275 "get_zone_info": false, 00:17:44.275 "zone_management": false, 00:17:44.275 "zone_append": false, 00:17:44.275 "compare": false, 00:17:44.275 "compare_and_write": false, 00:17:44.275 "abort": false, 00:17:44.275 "seek_hole": true, 00:17:44.275 "seek_data": true, 00:17:44.275 "copy": false, 00:17:44.275 "nvme_iov_md": false 00:17:44.275 }, 00:17:44.275 "driver_specific": { 00:17:44.275 "lvol": { 00:17:44.275 "lvol_store_uuid": "775ebe16-78b4-4d6b-aaa6-11b14ca20de9", 00:17:44.275 "base_bdev": "nvme0n1", 00:17:44.275 "thin_provision": true, 00:17:44.275 "num_allocated_clusters": 0, 00:17:44.275 "snapshot": false, 00:17:44.275 "clone": false, 00:17:44.275 "esnap_clone": false 00:17:44.275 } 00:17:44.275 } 00:17:44.275 } 00:17:44.275 ]' 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:44.275 23:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:44.276 23:51:32 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 299311c8-2edd-47b4-8078-84b3a6c67ecc -c nvc0n1p0 --l2p_dram_limit 60 00:17:44.534 [2024-11-26 23:51:32.579454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 23:51:32.579491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.534 [2024-11-26 23:51:32.579503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.534 
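With the thin lvol and the NV-cache slice in place, the FTL bdev itself comes from the single long-running RPC traced just above; the [FTL][ftl0] mngt trace_step lines that follow are its startup sequence. A sketch of the creation step, using the names from this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Slice 5171 MiB off the cache controller to serve as the FTL NV cache / write buffer
    $RPC bdev_split_create nvc0n1 -s 5171 1                         # -> nvc0n1p0
    # Create the FTL bdev on the thin lvol, limiting the in-DRAM L2P table to 60 MiB
    $RPC -t 240 bdev_ftl_create -b ftl0 \
        -d 299311c8-2edd-47b4-8078-84b3a6c67ecc \
        -c nvc0n1p0 --l2p_dram_limit 60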
[2024-11-26 23:51:32.579512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.579558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 23:51:32.579567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.534 [2024-11-26 23:51:32.579574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:44.534 [2024-11-26 23:51:32.579584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.579606] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.534 [2024-11-26 23:51:32.579814] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.534 [2024-11-26 23:51:32.579841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 23:51:32.579849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.534 [2024-11-26 23:51:32.579855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:17:44.534 [2024-11-26 23:51:32.579863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.579895] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 40092998-5ef6-470a-a011-b75585ff485c 00:17:44.534 [2024-11-26 23:51:32.580849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 23:51:32.580871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:44.534 [2024-11-26 23:51:32.580883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:44.534 [2024-11-26 23:51:32.580890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.585689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 23:51:32.585714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.534 [2024-11-26 23:51:32.585725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:17:44.534 [2024-11-26 23:51:32.585732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.585827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 23:51:32.585836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.534 [2024-11-26 23:51:32.585846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:44.534 [2024-11-26 23:51:32.585857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.585906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 23:51:32.585918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.534 [2024-11-26 23:51:32.585926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.534 [2024-11-26 23:51:32.585931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.585959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.534 [2024-11-26 23:51:32.587177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 
23:51:32.587202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.534 [2024-11-26 23:51:32.587210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:17:44.534 [2024-11-26 23:51:32.587228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.587272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.534 [2024-11-26 23:51:32.587281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.534 [2024-11-26 23:51:32.587287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:44.534 [2024-11-26 23:51:32.587298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.534 [2024-11-26 23:51:32.587320] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:44.534 [2024-11-26 23:51:32.587454] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.534 [2024-11-26 23:51:32.587476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.534 [2024-11-26 23:51:32.587486] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.535 [2024-11-26 23:51:32.587503] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.535 [2024-11-26 23:51:32.587512] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.535 [2024-11-26 23:51:32.587523] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.535 [2024-11-26 23:51:32.587530] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.535 [2024-11-26 23:51:32.587536] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.535 [2024-11-26 23:51:32.587543] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.535 [2024-11-26 23:51:32.587549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.535 [2024-11-26 23:51:32.587557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.535 [2024-11-26 23:51:32.587566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:17:44.535 [2024-11-26 23:51:32.587573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.535 [2024-11-26 23:51:32.587645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.535 [2024-11-26 23:51:32.587660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.535 [2024-11-26 23:51:32.587666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:44.535 [2024-11-26 23:51:32.587673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.535 [2024-11-26 23:51:32.587784] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.535 [2024-11-26 23:51:32.587811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.535 [2024-11-26 23:51:32.587818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.535 [2024-11-26 23:51:32.587826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.535 [2024-11-26 23:51:32.587833] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:44.535 [2024-11-26 23:51:32.587841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.535 [2024-11-26 23:51:32.587848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.535 [2024-11-26 23:51:32.587856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.535 [2024-11-26 23:51:32.587862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.535 [2024-11-26 23:51:32.587870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.535 [2024-11-26 23:51:32.587876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.535 [2024-11-26 23:51:32.587884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.535 [2024-11-26 23:51:32.587891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.535 [2024-11-26 23:51:32.587900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.535 [2024-11-26 23:51:32.587910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:44.535 [2024-11-26 23:51:32.587918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.535 [2024-11-26 23:51:32.587925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.535 [2024-11-26 23:51:32.587932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:44.535 [2024-11-26 23:51:32.587938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.535 [2024-11-26 23:51:32.587952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.535 [2024-11-26 23:51:32.587959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.535 [2024-11-26 23:51:32.587966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.535 [2024-11-26 23:51:32.587972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.535 [2024-11-26 23:51:32.587981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.535 [2024-11-26 23:51:32.587989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.535 [2024-11-26 23:51:32.587997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.535 [2024-11-26 23:51:32.588003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.535 [2024-11-26 23:51:32.588011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.535 [2024-11-26 23:51:32.588018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.535 [2024-11-26 23:51:32.588027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:44.535 [2024-11-26 23:51:32.588037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.535 [2024-11-26 23:51:32.588045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.535 [2024-11-26 23:51:32.588051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:44.535 [2024-11-26 23:51:32.588058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.535 [2024-11-26 23:51:32.588064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.535 [2024-11-26 23:51:32.588073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:44.535 [2024-11-26 23:51:32.588079] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.535 [2024-11-26 23:51:32.588087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.535 [2024-11-26 23:51:32.588092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:44.535 [2024-11-26 23:51:32.588099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.535 [2024-11-26 23:51:32.588105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.535 [2024-11-26 23:51:32.588113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:44.535 [2024-11-26 23:51:32.588119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.535 [2024-11-26 23:51:32.588126] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.535 [2024-11-26 23:51:32.588133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.535 [2024-11-26 23:51:32.588156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.535 [2024-11-26 23:51:32.588163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.535 [2024-11-26 23:51:32.588172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.535 [2024-11-26 23:51:32.588178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.535 [2024-11-26 23:51:32.588185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.535 [2024-11-26 23:51:32.588190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.535 [2024-11-26 23:51:32.588199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.535 [2024-11-26 23:51:32.588206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.535 [2024-11-26 23:51:32.588215] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.535 [2024-11-26 23:51:32.588225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.535 [2024-11-26 23:51:32.588235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.535 [2024-11-26 23:51:32.588241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:44.535 [2024-11-26 23:51:32.588249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:44.535 [2024-11-26 23:51:32.588255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:44.535 [2024-11-26 23:51:32.588261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:44.535 [2024-11-26 23:51:32.588267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:44.535 [2024-11-26 23:51:32.588275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:44.535 [2024-11-26 23:51:32.588280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:44.535 [2024-11-26 23:51:32.588287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:44.536 [2024-11-26 23:51:32.588293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:44.536 [2024-11-26 23:51:32.588300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:44.536 [2024-11-26 23:51:32.588306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:44.536 [2024-11-26 23:51:32.588313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:44.536 [2024-11-26 23:51:32.588319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:44.536 [2024-11-26 23:51:32.588327] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.536 [2024-11-26 23:51:32.588333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.536 [2024-11-26 23:51:32.588340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.536 [2024-11-26 23:51:32.588346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.536 [2024-11-26 23:51:32.588352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.536 [2024-11-26 23:51:32.588358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.536 [2024-11-26 23:51:32.588377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.536 [2024-11-26 23:51:32.588387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.536 [2024-11-26 23:51:32.588404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:17:44.536 [2024-11-26 23:51:32.588423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.536 [2024-11-26 23:51:32.588496] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:44.536 [2024-11-26 23:51:32.588508] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:47.057 [2024-11-26 23:51:35.154429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.057 [2024-11-26 23:51:35.154476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:47.057 [2024-11-26 23:51:35.154493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2565.919 ms 00:17:47.057 [2024-11-26 23:51:35.154501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.057 [2024-11-26 23:51:35.162493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.057 [2024-11-26 23:51:35.162531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:47.057 [2024-11-26 23:51:35.162546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.900 ms 00:17:47.057 [2024-11-26 23:51:35.162554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.057 [2024-11-26 23:51:35.162661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.057 [2024-11-26 23:51:35.162671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:47.057 [2024-11-26 23:51:35.162682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:47.057 [2024-11-26 23:51:35.162689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.057 [2024-11-26 23:51:35.183633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.057 [2024-11-26 23:51:35.183698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.057 [2024-11-26 23:51:35.183722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.865 ms 00:17:47.057 [2024-11-26 23:51:35.183738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.057 [2024-11-26 23:51:35.183847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.057 [2024-11-26 23:51:35.183880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.057 [2024-11-26 23:51:35.183898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:47.057 [2024-11-26 23:51:35.183913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.057 [2024-11-26 23:51:35.184402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.057 [2024-11-26 23:51:35.184441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.057 [2024-11-26 23:51:35.184482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:47.057 [2024-11-26 23:51:35.184499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.057 [2024-11-26 23:51:35.184725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.057 [2024-11-26 23:51:35.184750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.057 [2024-11-26 23:51:35.184784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:17:47.057 [2024-11-26 23:51:35.184819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.191528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.191560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.317 [2024-11-26 
23:51:35.191581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.665 ms 00:17:47.317 [2024-11-26 23:51:35.191592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.199763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:47.317 [2024-11-26 23:51:35.213723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.213754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:47.317 [2024-11-26 23:51:35.213764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.034 ms 00:17:47.317 [2024-11-26 23:51:35.213784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.252146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.252183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:47.317 [2024-11-26 23:51:35.252193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.313 ms 00:17:47.317 [2024-11-26 23:51:35.252205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.252385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.252398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:47.317 [2024-11-26 23:51:35.252407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:17:47.317 [2024-11-26 23:51:35.252428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.255207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.255241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:47.317 [2024-11-26 23:51:35.255260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.736 ms 00:17:47.317 [2024-11-26 23:51:35.255270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.257577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.257619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:47.317 [2024-11-26 23:51:35.257630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:17:47.317 [2024-11-26 23:51:35.257639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.257958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.257981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:47.317 [2024-11-26 23:51:35.257990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:17:47.317 [2024-11-26 23:51:35.258001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.281511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.281556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:47.317 [2024-11-26 23:51:35.281566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.479 ms 00:17:47.317 [2024-11-26 23:51:35.281575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.285178] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.285213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:47.317 [2024-11-26 23:51:35.285222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.522 ms 00:17:47.317 [2024-11-26 23:51:35.285231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.287874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.287906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:47.317 [2024-11-26 23:51:35.287915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:17:47.317 [2024-11-26 23:51:35.287924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.290991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.291023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:47.317 [2024-11-26 23:51:35.291032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:17:47.317 [2024-11-26 23:51:35.291043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.291095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.291107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:47.317 [2024-11-26 23:51:35.291115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:47.317 [2024-11-26 23:51:35.291124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.291205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.317 [2024-11-26 23:51:35.291220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:47.317 [2024-11-26 23:51:35.291228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:47.317 [2024-11-26 23:51:35.291237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.317 [2024-11-26 23:51:35.292218] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2712.300 ms, result 0 00:17:47.317 { 00:17:47.317 "name": "ftl0", 00:17:47.317 "uuid": "40092998-5ef6-470a-a011-b75585ff485c" 00:17:47.317 } 00:17:47.317 23:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:47.317 23:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:47.317 23:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:47.317 23:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:47.317 23:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.317 23:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.317 23:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:47.576 23:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:47.834 [ 00:17:47.834 { 00:17:47.834 "name": "ftl0", 00:17:47.834 "aliases": [ 00:17:47.834 "40092998-5ef6-470a-a011-b75585ff485c" 00:17:47.834 ], 00:17:47.834 "product_name": "FTL disk", 00:17:47.834 
"block_size": 4096, 00:17:47.834 "num_blocks": 20971520, 00:17:47.834 "uuid": "40092998-5ef6-470a-a011-b75585ff485c", 00:17:47.834 "assigned_rate_limits": { 00:17:47.834 "rw_ios_per_sec": 0, 00:17:47.834 "rw_mbytes_per_sec": 0, 00:17:47.834 "r_mbytes_per_sec": 0, 00:17:47.834 "w_mbytes_per_sec": 0 00:17:47.834 }, 00:17:47.834 "claimed": false, 00:17:47.834 "zoned": false, 00:17:47.834 "supported_io_types": { 00:17:47.834 "read": true, 00:17:47.834 "write": true, 00:17:47.834 "unmap": true, 00:17:47.834 "flush": true, 00:17:47.834 "reset": false, 00:17:47.834 "nvme_admin": false, 00:17:47.834 "nvme_io": false, 00:17:47.834 "nvme_io_md": false, 00:17:47.834 "write_zeroes": true, 00:17:47.834 "zcopy": false, 00:17:47.834 "get_zone_info": false, 00:17:47.834 "zone_management": false, 00:17:47.834 "zone_append": false, 00:17:47.834 "compare": false, 00:17:47.834 "compare_and_write": false, 00:17:47.834 "abort": false, 00:17:47.834 "seek_hole": false, 00:17:47.834 "seek_data": false, 00:17:47.834 "copy": false, 00:17:47.834 "nvme_iov_md": false 00:17:47.834 }, 00:17:47.834 "driver_specific": { 00:17:47.834 "ftl": { 00:17:47.834 "base_bdev": "299311c8-2edd-47b4-8078-84b3a6c67ecc", 00:17:47.834 "cache": "nvc0n1p0" 00:17:47.834 } 00:17:47.834 } 00:17:47.834 } 00:17:47.834 ] 00:17:47.834 23:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:47.834 23:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:47.834 23:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:47.834 23:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:47.834 23:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:48.094 [2024-11-26 23:51:36.106781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.094 [2024-11-26 23:51:36.106828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:48.094 [2024-11-26 23:51:36.106843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.094 [2024-11-26 23:51:36.106851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.094 [2024-11-26 23:51:36.106888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.094 [2024-11-26 23:51:36.107316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.094 [2024-11-26 23:51:36.107349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:48.094 [2024-11-26 23:51:36.107359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:17:48.094 [2024-11-26 23:51:36.107369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.094 [2024-11-26 23:51:36.107953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.107977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:48.095 [2024-11-26 23:51:36.107986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:17:48.095 [2024-11-26 23:51:36.108009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.111250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.111274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:48.095 [2024-11-26 
23:51:36.111283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:17:48.095 [2024-11-26 23:51:36.111296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.117037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.117062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:48.095 [2024-11-26 23:51:36.117071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.710 ms 00:17:48.095 [2024-11-26 23:51:36.117079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.118465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.118498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:48.095 [2024-11-26 23:51:36.118506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:17:48.095 [2024-11-26 23:51:36.118513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.122002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.122036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:48.095 [2024-11-26 23:51:36.122044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:17:48.095 [2024-11-26 23:51:36.122052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.122215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.122235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:48.095 [2024-11-26 23:51:36.122242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:48.095 [2024-11-26 23:51:36.122263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.123639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.123670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:48.095 [2024-11-26 23:51:36.123677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:17:48.095 [2024-11-26 23:51:36.123684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.124682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.124713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:48.095 [2024-11-26 23:51:36.124720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:17:48.095 [2024-11-26 23:51:36.124727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.125490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.125520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:48.095 [2024-11-26 23:51:36.125527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:17:48.095 [2024-11-26 23:51:36.125534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.126340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.095 [2024-11-26 23:51:36.126370] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:48.095 [2024-11-26 23:51:36.126377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:17:48.095 [2024-11-26 23:51:36.126384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.095 [2024-11-26 23:51:36.126413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:48.095 [2024-11-26 23:51:36.126426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 
23:51:36.126593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:48.095 [2024-11-26 23:51:36.126761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:48.095 [2024-11-26 23:51:36.126819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.126999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:48.096 [2024-11-26 23:51:36.127131] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:48.096 [2024-11-26 23:51:36.127140] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 40092998-5ef6-470a-a011-b75585ff485c 00:17:48.096 [2024-11-26 23:51:36.127149] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:48.096 [2024-11-26 23:51:36.127154] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:48.096 [2024-11-26 23:51:36.127161] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:48.096 [2024-11-26 23:51:36.127184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:48.096 [2024-11-26 23:51:36.127192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:48.096 [2024-11-26 23:51:36.127198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:48.096 [2024-11-26 23:51:36.127206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:48.096 [2024-11-26 23:51:36.127210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:48.096 [2024-11-26 23:51:36.127216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:48.096 [2024-11-26 23:51:36.127222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.096 [2024-11-26 23:51:36.127230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:48.096 [2024-11-26 23:51:36.127237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:17:48.096 [2024-11-26 23:51:36.127244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.128492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.096 [2024-11-26 23:51:36.128516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:48.096 [2024-11-26 23:51:36.128532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:17:48.096 [2024-11-26 23:51:36.128548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.128627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.096 [2024-11-26 23:51:36.128636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:48.096 [2024-11-26 23:51:36.128644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:48.096 [2024-11-26 23:51:36.128651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.133095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.096 [2024-11-26 23:51:36.133122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.096 [2024-11-26 23:51:36.133130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.096 [2024-11-26 23:51:36.133137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 
[2024-11-26 23:51:36.133189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.096 [2024-11-26 23:51:36.133198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.096 [2024-11-26 23:51:36.133206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.096 [2024-11-26 23:51:36.133213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.133265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.096 [2024-11-26 23:51:36.133277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.096 [2024-11-26 23:51:36.133283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.096 [2024-11-26 23:51:36.133300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.133331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.096 [2024-11-26 23:51:36.133339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.096 [2024-11-26 23:51:36.133345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.096 [2024-11-26 23:51:36.133353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.141343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.096 [2024-11-26 23:51:36.141376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.096 [2024-11-26 23:51:36.141385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.096 [2024-11-26 23:51:36.141392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.147969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.096 [2024-11-26 23:51:36.148000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.096 [2024-11-26 23:51:36.148026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.096 [2024-11-26 23:51:36.148037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.148108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.096 [2024-11-26 23:51:36.148130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.096 [2024-11-26 23:51:36.148137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.096 [2024-11-26 23:51:36.148145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.096 [2024-11-26 23:51:36.148201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.096 [2024-11-26 23:51:36.148210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.096 [2024-11-26 23:51:36.148218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.096 [2024-11-26 23:51:36.148225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.097 [2024-11-26 23:51:36.148293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.097 [2024-11-26 23:51:36.148304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.097 [2024-11-26 23:51:36.148319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.097 [2024-11-26 23:51:36.148326] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.097 [2024-11-26 23:51:36.148362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.097 [2024-11-26 23:51:36.148371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:48.097 [2024-11-26 23:51:36.148378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.097 [2024-11-26 23:51:36.148385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.097 [2024-11-26 23:51:36.148434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.097 [2024-11-26 23:51:36.148449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.097 [2024-11-26 23:51:36.148455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.097 [2024-11-26 23:51:36.148462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.097 [2024-11-26 23:51:36.148508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.097 [2024-11-26 23:51:36.148523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.097 [2024-11-26 23:51:36.148530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.097 [2024-11-26 23:51:36.148539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.097 [2024-11-26 23:51:36.148706] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 41.888 ms, result 0 00:17:48.097 true 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86222 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86222 ']' 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86222 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86222 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.097 killing process with pid 86222 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86222' 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86222 00:17:48.097 23:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86222 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:54.654 23:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:54.915 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:54.915 fio-3.35 00:17:54.915 Starting 1 thread 00:18:01.503 00:18:01.503 test: (groupid=0, jobs=1): err= 0: pid=86391: Tue Nov 26 23:51:48 2024 00:18:01.503 read: IOPS=788, BW=52.3MiB/s (54.9MB/s)(255MiB/4864msec) 00:18:01.503 slat (nsec): min=4119, max=52000, avg=6537.87, stdev=3376.57 00:18:01.503 clat (usec): min=300, max=1454, avg=573.32, stdev=155.29 00:18:01.504 lat (usec): min=308, max=1459, avg=579.86, stdev=155.99 00:18:01.504 clat percentiles (usec): 00:18:01.504 | 1.00th=[ 343], 5.00th=[ 396], 10.00th=[ 424], 20.00th=[ 469], 00:18:01.504 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 529], 60.00th=[ 537], 00:18:01.504 | 70.00th=[ 562], 80.00th=[ 635], 90.00th=[ 832], 95.00th=[ 889], 00:18:01.504 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1303], 99.95th=[ 1418], 00:18:01.504 | 99.99th=[ 1450] 00:18:01.504 write: IOPS=793, BW=52.7MiB/s (55.2MB/s)(256MiB/4860msec); 0 zone resets 00:18:01.504 slat (nsec): min=14573, max=98890, avg=22657.56, stdev=7054.92 00:18:01.504 clat (usec): min=321, max=1932, avg=648.44, stdev=177.67 00:18:01.504 lat (usec): min=343, max=1958, avg=671.09, stdev=179.22 00:18:01.504 clat percentiles (usec): 00:18:01.504 | 1.00th=[ 416], 5.00th=[ 482], 10.00th=[ 498], 20.00th=[ 545], 00:18:01.504 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 611], 00:18:01.504 | 70.00th=[ 644], 80.00th=[ 807], 90.00th=[ 914], 95.00th=[ 979], 00:18:01.504 | 99.00th=[ 1205], 99.50th=[ 1319], 99.90th=[ 1762], 99.95th=[ 1926], 00:18:01.504 | 99.99th=[ 1926] 00:18:01.504 bw ( KiB/s): min=45832, max=61472, per=100.00%, avg=54143.11, stdev=4371.32, samples=9 00:18:01.504 iops : min= 674, max= 904, avg=796.22, stdev=64.28, samples=9 00:18:01.504 lat (usec) : 500=20.05%, 750=61.05%, 1000=16.04% 
00:18:01.504 lat (msec) : 2=2.86% 00:18:01.504 cpu : usr=98.68%, sys=0.33%, ctx=9, majf=0, minf=1326 00:18:01.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.504 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.504 00:18:01.504 Run status group 0 (all jobs): 00:18:01.504 READ: bw=52.3MiB/s (54.9MB/s), 52.3MiB/s-52.3MiB/s (54.9MB/s-54.9MB/s), io=255MiB (267MB), run=4864-4864msec 00:18:01.504 WRITE: bw=52.7MiB/s (55.2MB/s), 52.7MiB/s-52.7MiB/s (55.2MB/s-55.2MB/s), io=256MiB (269MB), run=4860-4860msec 00:18:01.504 ----------------------------------------------------- 00:18:01.504 Suppressions used: 00:18:01.504 count bytes template 00:18:01.504 1 5 /usr/src/fio/parse.c 00:18:01.504 1 8 libtcmalloc_minimal.so 00:18:01.504 1 904 libcrypto.so 00:18:01.504 ----------------------------------------------------- 00:18:01.504 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:01.504 23:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:01.504 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:01.504 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:01.504 fio-3.35 00:18:01.504 Starting 2 threads 00:18:28.132 00:18:28.132 first_half: (groupid=0, jobs=1): err= 0: pid=86487: Tue Nov 26 23:52:13 2024 00:18:28.132 read: IOPS=2802, BW=10.9MiB/s (11.5MB/s)(256MiB/23365msec) 00:18:28.132 slat (nsec): min=3020, max=39465, avg=5336.83, stdev=1183.04 00:18:28.132 clat (usec): min=506, max=294496, avg=38031.36, stdev=26194.81 00:18:28.132 lat (usec): min=512, max=294501, avg=38036.70, stdev=26194.89 00:18:28.132 clat percentiles (msec): 00:18:28.132 | 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 30], 00:18:28.132 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 34], 00:18:28.132 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 46], 95.00th=[ 80], 00:18:28.132 | 99.00th=[ 159], 99.50th=[ 205], 99.90th=[ 275], 99.95th=[ 288], 00:18:28.132 | 99.99th=[ 292] 00:18:28.132 write: IOPS=2809, BW=11.0MiB/s (11.5MB/s)(256MiB/23328msec); 0 zone resets 00:18:28.132 slat (usec): min=3, max=414, avg= 6.70, stdev= 4.45 00:18:28.132 clat (usec): min=338, max=69016, avg=7606.96, stdev=8590.07 00:18:28.132 lat (usec): min=344, max=69021, avg=7613.66, stdev=8590.30 00:18:28.132 clat percentiles (usec): 00:18:28.132 | 1.00th=[ 742], 5.00th=[ 881], 10.00th=[ 1106], 20.00th=[ 2474], 00:18:28.132 | 30.00th=[ 3326], 40.00th=[ 4293], 50.00th=[ 5080], 60.00th=[ 5604], 00:18:28.132 | 70.00th=[ 6259], 80.00th=[10159], 90.00th=[19268], 95.00th=[27132], 00:18:28.132 | 99.00th=[44303], 99.50th=[47449], 99.90th=[65799], 99.95th=[67634], 00:18:28.132 | 99.99th=[68682] 00:18:28.132 bw ( KiB/s): min= 1432, max=56304, per=100.00%, avg=24798.48, stdev=15071.46, samples=21 00:18:28.132 iops : min= 358, max=14076, avg=6199.62, stdev=3767.86, samples=21 00:18:28.132 lat (usec) : 500=0.02%, 750=0.54%, 1000=3.64% 00:18:28.132 lat (msec) : 2=3.76%, 4=10.78%, 10=22.12%, 20=6.29%, 50=48.31% 00:18:28.132 lat (msec) : 100=2.60%, 250=1.84%, 500=0.10% 00:18:28.132 cpu : usr=99.18%, sys=0.18%, ctx=45, majf=0, minf=5587 00:18:28.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:28.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.132 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.132 issued rwts: total=65469,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.132 second_half: (groupid=0, jobs=1): err= 0: pid=86488: Tue Nov 26 23:52:13 2024 00:18:28.132 read: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(256MiB/23158msec) 00:18:28.132 slat (nsec): min=3168, max=47566, avg=4522.62, stdev=1192.95 00:18:28.132 clat (msec): min=9, max=299, avg=38.29, stdev=24.01 00:18:28.132 lat (msec): min=9, max=299, avg=38.29, stdev=24.01 00:18:28.132 clat percentiles (msec): 00:18:28.132 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:18:28.132 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:18:28.132 | 70.00th=[ 36], 80.00th=[ 38], 90.00th=[ 48], 95.00th=[ 70], 
00:18:28.132 | 99.00th=[ 153], 99.50th=[ 199], 99.90th=[ 268], 99.95th=[ 271], 00:18:28.132 | 99.99th=[ 296] 00:18:28.132 write: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(256MiB/23049msec); 0 zone resets 00:18:28.132 slat (usec): min=3, max=1485, avg= 5.95, stdev=11.94 00:18:28.132 clat (usec): min=377, max=57287, avg=6947.38, stdev=6557.18 00:18:28.132 lat (usec): min=385, max=57293, avg=6953.33, stdev=6558.03 00:18:28.132 clat percentiles (usec): 00:18:28.132 | 1.00th=[ 807], 5.00th=[ 1680], 10.00th=[ 2442], 20.00th=[ 3326], 00:18:28.132 | 30.00th=[ 3982], 40.00th=[ 4686], 50.00th=[ 5276], 60.00th=[ 5669], 00:18:28.132 | 70.00th=[ 6194], 80.00th=[ 8586], 90.00th=[13566], 95.00th=[19530], 00:18:28.132 | 99.00th=[39060], 99.50th=[46924], 99.90th=[50070], 99.95th=[53216], 00:18:28.132 | 99.99th=[55313] 00:18:28.132 bw ( KiB/s): min= 1672, max=41816, per=100.00%, avg=24873.90, stdev=11628.69, samples=21 00:18:28.132 iops : min= 418, max=10454, avg=6218.48, stdev=2907.17, samples=21 00:18:28.132 lat (usec) : 500=0.02%, 750=0.23%, 1000=0.92% 00:18:28.132 lat (msec) : 2=1.92%, 4=12.00%, 10=26.72%, 20=5.95%, 50=47.71% 00:18:28.132 lat (msec) : 100=2.89%, 250=1.45%, 500=0.18% 00:18:28.132 cpu : usr=99.30%, sys=0.13%, ctx=35, majf=0, minf=5555 00:18:28.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:28.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.132 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:28.133 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:28.133 00:18:28.133 Run status group 0 (all jobs): 00:18:28.133 READ: bw=21.9MiB/s (23.0MB/s), 10.9MiB/s-11.0MiB/s (11.5MB/s-11.6MB/s), io=512MiB (536MB), run=23158-23365msec 00:18:28.133 WRITE: bw=21.9MiB/s (23.0MB/s), 11.0MiB/s-11.1MiB/s (11.5MB/s-11.6MB/s), io=512MiB (537MB), run=23049-23328msec 00:18:28.133 ----------------------------------------------------- 00:18:28.133 Suppressions used: 00:18:28.133 count bytes template 00:18:28.133 2 10 /usr/src/fio/parse.c 00:18:28.133 3 288 /usr/src/fio/iolog.c 00:18:28.133 1 8 libtcmalloc_minimal.so 00:18:28.133 1 904 libcrypto.so 00:18:28.133 ----------------------------------------------------- 00:18:28.133 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.133 
23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:28.133 23:52:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:28.133 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:28.133 fio-3.35 00:18:28.133 Starting 1 thread 00:18:43.026 00:18:43.026 test: (groupid=0, jobs=1): err= 0: pid=86795: Tue Nov 26 23:52:29 2024 00:18:43.026 read: IOPS=7939, BW=31.0MiB/s (32.5MB/s)(255MiB/8212msec) 00:18:43.026 slat (nsec): min=3101, max=22729, avg=4846.43, stdev=1058.44 00:18:43.026 clat (usec): min=543, max=32075, avg=16112.41, stdev=1889.05 00:18:43.026 lat (usec): min=547, max=32079, avg=16117.26, stdev=1889.10 00:18:43.026 clat percentiles (usec): 00:18:43.026 | 1.00th=[13698], 5.00th=[13960], 10.00th=[14091], 20.00th=[15401], 00:18:43.026 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16057], 60.00th=[16319], 00:18:43.026 | 70.00th=[16319], 80.00th=[16450], 90.00th=[16712], 95.00th=[18220], 00:18:43.026 | 99.00th=[25035], 99.50th=[25560], 99.90th=[28705], 99.95th=[28967], 00:18:43.026 | 99.99th=[31327] 00:18:43.026 write: IOPS=12.7k, BW=49.7MiB/s (52.1MB/s)(256MiB/5154msec); 0 zone resets 00:18:43.026 slat (usec): min=4, max=702, avg= 6.74, stdev= 4.34 00:18:43.026 clat (usec): min=453, max=50845, avg=10021.02, stdev=10405.74 00:18:43.026 lat (usec): min=459, max=50851, avg=10027.76, stdev=10405.94 00:18:43.026 clat percentiles (usec): 00:18:43.026 | 1.00th=[ 627], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 832], 00:18:43.026 | 30.00th=[ 1020], 40.00th=[ 1434], 50.00th=[ 5473], 60.00th=[11076], 00:18:43.026 | 70.00th=[15139], 80.00th=[18482], 90.00th=[27919], 95.00th=[29754], 00:18:43.026 | 99.00th=[35390], 99.50th=[36963], 99.90th=[40633], 99.95th=[41681], 00:18:43.026 | 99.99th=[48497] 00:18:43.026 bw ( KiB/s): min=14536, max=78600, per=93.70%, avg=47657.27, stdev=18999.79, samples=11 00:18:43.026 iops : min= 3634, max=19650, avg=11914.27, stdev=4750.00, samples=11 00:18:43.026 lat (usec) : 500=0.01%, 750=5.51%, 1000=9.14% 00:18:43.026 lat (msec) : 2=5.98%, 4=0.54%, 10=7.93%, 20=60.10%, 50=10.80% 00:18:43.026 lat (msec) : 100=0.01% 00:18:43.026 cpu : usr=99.03%, sys=0.23%, ctx=28, 
majf=0, minf=5577 00:18:43.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:43.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.026 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:43.026 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:43.026 00:18:43.026 Run status group 0 (all jobs): 00:18:43.026 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=255MiB (267MB), run=8212-8212msec 00:18:43.026 WRITE: bw=49.7MiB/s (52.1MB/s), 49.7MiB/s-49.7MiB/s (52.1MB/s-52.1MB/s), io=256MiB (268MB), run=5154-5154msec 00:18:43.026 ----------------------------------------------------- 00:18:43.026 Suppressions used: 00:18:43.026 count bytes template 00:18:43.026 1 5 /usr/src/fio/parse.c 00:18:43.026 2 192 /usr/src/fio/iolog.c 00:18:43.026 1 8 libtcmalloc_minimal.so 00:18:43.026 1 904 libcrypto.so 00:18:43.026 ----------------------------------------------------- 00:18:43.026 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:43.026 Remove shared memory files 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69145 /dev/shm/spdk_tgt_trace.pid85164 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:43.026 00:18:43.026 real 1m2.064s 00:18:43.026 user 2m13.988s 00:18:43.026 sys 0m3.093s 00:18:43.026 ************************************ 00:18:43.026 END TEST ftl_fio_basic 00:18:43.026 ************************************ 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.026 23:52:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.026 23:52:31 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:43.026 23:52:31 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:43.026 23:52:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.026 23:52:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:43.026 ************************************ 00:18:43.026 START TEST ftl_bdevperf 00:18:43.026 ************************************ 00:18:43.026 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:43.026 * Looking for test storage... 
00:18:43.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:43.026 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.026 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.026 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.288 --rc genhtml_branch_coverage=1 00:18:43.288 --rc genhtml_function_coverage=1 00:18:43.288 --rc genhtml_legend=1 00:18:43.288 --rc geninfo_all_blocks=1 00:18:43.288 --rc geninfo_unexecuted_blocks=1 00:18:43.288 00:18:43.288 ' 00:18:43.288 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.288 --rc genhtml_branch_coverage=1 00:18:43.288 
--rc genhtml_function_coverage=1 00:18:43.288 --rc genhtml_legend=1 00:18:43.288 --rc geninfo_all_blocks=1 00:18:43.288 --rc geninfo_unexecuted_blocks=1 00:18:43.288 00:18:43.289 ' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.289 --rc genhtml_branch_coverage=1 00:18:43.289 --rc genhtml_function_coverage=1 00:18:43.289 --rc genhtml_legend=1 00:18:43.289 --rc geninfo_all_blocks=1 00:18:43.289 --rc geninfo_unexecuted_blocks=1 00:18:43.289 00:18:43.289 ' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.289 --rc genhtml_branch_coverage=1 00:18:43.289 --rc genhtml_function_coverage=1 00:18:43.289 --rc genhtml_legend=1 00:18:43.289 --rc geninfo_all_blocks=1 00:18:43.289 --rc geninfo_unexecuted_blocks=1 00:18:43.289 00:18:43.289 ' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=87022 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 87022 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 87022 ']' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.289 23:52:31 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:43.289 [2024-11-26 23:52:31.286620] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
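Note on the trace that follows: it assembles the ftl0 bdev over two NVMe namespaces via rpc.py. A condensed sketch of that same sequence, assuming the bdevperf target is already listening on the default /var/tmp/spdk.sock (bdev names, PCI addresses, sizes, and UUIDs are the ones reported in the trace below; the common.sh helper wrappers and UUID capture are omitted for brevity):

  # Base device: QEMU NVMe controller at 0000:00:11.0, exposed as nvme0n1
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # Carve a thin-provisioned 103424 MiB logical volume out of it
  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid_from_previous_step>
  # NV cache device: controller at 0000:00:10.0, split off a 5171 MiB write-buffer partition
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # Create the FTL bdev on top of the lvol (base) and the split partition (cache)
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol_bdev_name> -c nvc0n1p0 --l2p_dram_limit 20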
00:18:43.289 [2024-11-26 23:52:31.286823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87022 ] 00:18:43.549 [2024-11-26 23:52:31.433635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.549 [2024-11-26 23:52:31.476228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.121 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.121 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:44.121 23:52:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:44.121 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:44.121 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:44.121 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:44.121 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:44.121 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:44.381 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:44.382 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:44.382 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:44.382 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:44.382 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:44.382 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:44.382 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:44.382 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:44.642 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:44.642 { 00:18:44.642 "name": "nvme0n1", 00:18:44.642 "aliases": [ 00:18:44.642 "a71e30a9-33fa-48f3-9163-0c30a9294e55" 00:18:44.642 ], 00:18:44.642 "product_name": "NVMe disk", 00:18:44.642 "block_size": 4096, 00:18:44.642 "num_blocks": 1310720, 00:18:44.643 "uuid": "a71e30a9-33fa-48f3-9163-0c30a9294e55", 00:18:44.643 "numa_id": -1, 00:18:44.643 "assigned_rate_limits": { 00:18:44.643 "rw_ios_per_sec": 0, 00:18:44.643 "rw_mbytes_per_sec": 0, 00:18:44.643 "r_mbytes_per_sec": 0, 00:18:44.643 "w_mbytes_per_sec": 0 00:18:44.643 }, 00:18:44.643 "claimed": true, 00:18:44.643 "claim_type": "read_many_write_one", 00:18:44.643 "zoned": false, 00:18:44.643 "supported_io_types": { 00:18:44.643 "read": true, 00:18:44.643 "write": true, 00:18:44.643 "unmap": true, 00:18:44.643 "flush": true, 00:18:44.643 "reset": true, 00:18:44.643 "nvme_admin": true, 00:18:44.643 "nvme_io": true, 00:18:44.643 "nvme_io_md": false, 00:18:44.643 "write_zeroes": true, 00:18:44.643 "zcopy": false, 00:18:44.643 "get_zone_info": false, 00:18:44.643 "zone_management": false, 00:18:44.643 "zone_append": false, 00:18:44.643 "compare": true, 00:18:44.643 "compare_and_write": false, 00:18:44.643 "abort": true, 00:18:44.643 "seek_hole": false, 00:18:44.643 "seek_data": false, 00:18:44.643 "copy": true, 00:18:44.643 "nvme_iov_md": false 00:18:44.643 }, 00:18:44.643 "driver_specific": { 00:18:44.643 
"nvme": [ 00:18:44.643 { 00:18:44.643 "pci_address": "0000:00:11.0", 00:18:44.643 "trid": { 00:18:44.643 "trtype": "PCIe", 00:18:44.643 "traddr": "0000:00:11.0" 00:18:44.643 }, 00:18:44.643 "ctrlr_data": { 00:18:44.643 "cntlid": 0, 00:18:44.643 "vendor_id": "0x1b36", 00:18:44.643 "model_number": "QEMU NVMe Ctrl", 00:18:44.643 "serial_number": "12341", 00:18:44.643 "firmware_revision": "8.0.0", 00:18:44.643 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:44.643 "oacs": { 00:18:44.643 "security": 0, 00:18:44.643 "format": 1, 00:18:44.643 "firmware": 0, 00:18:44.643 "ns_manage": 1 00:18:44.643 }, 00:18:44.643 "multi_ctrlr": false, 00:18:44.643 "ana_reporting": false 00:18:44.643 }, 00:18:44.643 "vs": { 00:18:44.643 "nvme_version": "1.4" 00:18:44.643 }, 00:18:44.643 "ns_data": { 00:18:44.643 "id": 1, 00:18:44.643 "can_share": false 00:18:44.643 } 00:18:44.643 } 00:18:44.643 ], 00:18:44.643 "mp_policy": "active_passive" 00:18:44.643 } 00:18:44.643 } 00:18:44.643 ]' 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:44.643 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:44.904 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=775ebe16-78b4-4d6b-aaa6-11b14ca20de9 00:18:44.904 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:44.904 23:52:32 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 775ebe16-78b4-4d6b-aaa6-11b14ca20de9 00:18:45.166 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:45.426 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=e9770fbd-b1a3-4354-96cc-66c8a21fd5cb 00:18:45.426 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e9770fbd-b1a3-4354-96cc-66c8a21fd5cb 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:45.687 23:52:33 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:45.687 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:45.947 { 00:18:45.947 "name": "4a4fc92f-248b-4a49-b47c-b5b737c7cb1d", 00:18:45.947 "aliases": [ 00:18:45.947 "lvs/nvme0n1p0" 00:18:45.947 ], 00:18:45.947 "product_name": "Logical Volume", 00:18:45.947 "block_size": 4096, 00:18:45.947 "num_blocks": 26476544, 00:18:45.947 "uuid": "4a4fc92f-248b-4a49-b47c-b5b737c7cb1d", 00:18:45.947 "assigned_rate_limits": { 00:18:45.947 "rw_ios_per_sec": 0, 00:18:45.947 "rw_mbytes_per_sec": 0, 00:18:45.947 "r_mbytes_per_sec": 0, 00:18:45.947 "w_mbytes_per_sec": 0 00:18:45.947 }, 00:18:45.947 "claimed": false, 00:18:45.947 "zoned": false, 00:18:45.947 "supported_io_types": { 00:18:45.947 "read": true, 00:18:45.947 "write": true, 00:18:45.947 "unmap": true, 00:18:45.947 "flush": false, 00:18:45.947 "reset": true, 00:18:45.947 "nvme_admin": false, 00:18:45.947 "nvme_io": false, 00:18:45.947 "nvme_io_md": false, 00:18:45.947 "write_zeroes": true, 00:18:45.947 "zcopy": false, 00:18:45.947 "get_zone_info": false, 00:18:45.947 "zone_management": false, 00:18:45.947 "zone_append": false, 00:18:45.947 "compare": false, 00:18:45.947 "compare_and_write": false, 00:18:45.947 "abort": false, 00:18:45.947 "seek_hole": true, 00:18:45.947 "seek_data": true, 00:18:45.947 "copy": false, 00:18:45.947 "nvme_iov_md": false 00:18:45.947 }, 00:18:45.947 "driver_specific": { 00:18:45.947 "lvol": { 00:18:45.947 "lvol_store_uuid": "e9770fbd-b1a3-4354-96cc-66c8a21fd5cb", 00:18:45.947 "base_bdev": "nvme0n1", 00:18:45.947 "thin_provision": true, 00:18:45.947 "num_allocated_clusters": 0, 00:18:45.947 "snapshot": false, 00:18:45.947 "clone": false, 00:18:45.947 "esnap_clone": false 00:18:45.947 } 00:18:45.947 } 00:18:45.947 } 00:18:45.947 ]' 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:45.947 23:52:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:46.206 23:52:34 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:46.206 23:52:34 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:46.206 23:52:34 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:46.206 23:52:34 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:46.206 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:46.206 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:46.206 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:46.206 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:46.464 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:46.464 { 00:18:46.464 "name": "4a4fc92f-248b-4a49-b47c-b5b737c7cb1d", 00:18:46.464 "aliases": [ 00:18:46.464 "lvs/nvme0n1p0" 00:18:46.464 ], 00:18:46.464 "product_name": "Logical Volume", 00:18:46.464 "block_size": 4096, 00:18:46.464 "num_blocks": 26476544, 00:18:46.464 "uuid": "4a4fc92f-248b-4a49-b47c-b5b737c7cb1d", 00:18:46.464 "assigned_rate_limits": { 00:18:46.464 "rw_ios_per_sec": 0, 00:18:46.464 "rw_mbytes_per_sec": 0, 00:18:46.464 "r_mbytes_per_sec": 0, 00:18:46.464 "w_mbytes_per_sec": 0 00:18:46.464 }, 00:18:46.464 "claimed": false, 00:18:46.464 "zoned": false, 00:18:46.464 "supported_io_types": { 00:18:46.464 "read": true, 00:18:46.464 "write": true, 00:18:46.464 "unmap": true, 00:18:46.464 "flush": false, 00:18:46.464 "reset": true, 00:18:46.464 "nvme_admin": false, 00:18:46.464 "nvme_io": false, 00:18:46.464 "nvme_io_md": false, 00:18:46.464 "write_zeroes": true, 00:18:46.464 "zcopy": false, 00:18:46.464 "get_zone_info": false, 00:18:46.464 "zone_management": false, 00:18:46.464 "zone_append": false, 00:18:46.464 "compare": false, 00:18:46.464 "compare_and_write": false, 00:18:46.464 "abort": false, 00:18:46.464 "seek_hole": true, 00:18:46.464 "seek_data": true, 00:18:46.464 "copy": false, 00:18:46.464 "nvme_iov_md": false 00:18:46.464 }, 00:18:46.464 "driver_specific": { 00:18:46.464 "lvol": { 00:18:46.464 "lvol_store_uuid": "e9770fbd-b1a3-4354-96cc-66c8a21fd5cb", 00:18:46.464 "base_bdev": "nvme0n1", 00:18:46.464 "thin_provision": true, 00:18:46.464 "num_allocated_clusters": 0, 00:18:46.464 "snapshot": false, 00:18:46.464 "clone": false, 00:18:46.464 "esnap_clone": false 00:18:46.464 } 00:18:46.464 } 00:18:46.465 } 00:18:46.465 ]' 00:18:46.465 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:46.465 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:46.465 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:46.465 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:46.465 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:46.465 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:46.465 23:52:34 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:46.465 23:52:34 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:46.723 23:52:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:46.723 23:52:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:46.723 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:46.723 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:46.723 23:52:34 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:46.723 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:46.723 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a4fc92f-248b-4a49-b47c-b5b737c7cb1d 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:46.982 { 00:18:46.982 "name": "4a4fc92f-248b-4a49-b47c-b5b737c7cb1d", 00:18:46.982 "aliases": [ 00:18:46.982 "lvs/nvme0n1p0" 00:18:46.982 ], 00:18:46.982 "product_name": "Logical Volume", 00:18:46.982 "block_size": 4096, 00:18:46.982 "num_blocks": 26476544, 00:18:46.982 "uuid": "4a4fc92f-248b-4a49-b47c-b5b737c7cb1d", 00:18:46.982 "assigned_rate_limits": { 00:18:46.982 "rw_ios_per_sec": 0, 00:18:46.982 "rw_mbytes_per_sec": 0, 00:18:46.982 "r_mbytes_per_sec": 0, 00:18:46.982 "w_mbytes_per_sec": 0 00:18:46.982 }, 00:18:46.982 "claimed": false, 00:18:46.982 "zoned": false, 00:18:46.982 "supported_io_types": { 00:18:46.982 "read": true, 00:18:46.982 "write": true, 00:18:46.982 "unmap": true, 00:18:46.982 "flush": false, 00:18:46.982 "reset": true, 00:18:46.982 "nvme_admin": false, 00:18:46.982 "nvme_io": false, 00:18:46.982 "nvme_io_md": false, 00:18:46.982 "write_zeroes": true, 00:18:46.982 "zcopy": false, 00:18:46.982 "get_zone_info": false, 00:18:46.982 "zone_management": false, 00:18:46.982 "zone_append": false, 00:18:46.982 "compare": false, 00:18:46.982 "compare_and_write": false, 00:18:46.982 "abort": false, 00:18:46.982 "seek_hole": true, 00:18:46.982 "seek_data": true, 00:18:46.982 "copy": false, 00:18:46.982 "nvme_iov_md": false 00:18:46.982 }, 00:18:46.982 "driver_specific": { 00:18:46.982 "lvol": { 00:18:46.982 "lvol_store_uuid": "e9770fbd-b1a3-4354-96cc-66c8a21fd5cb", 00:18:46.982 "base_bdev": "nvme0n1", 00:18:46.982 "thin_provision": true, 00:18:46.982 "num_allocated_clusters": 0, 00:18:46.982 "snapshot": false, 00:18:46.982 "clone": false, 00:18:46.982 "esnap_clone": false 00:18:46.982 } 00:18:46.982 } 00:18:46.982 } 00:18:46.982 ]' 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:46.982 23:52:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4a4fc92f-248b-4a49-b47c-b5b737c7cb1d -c nvc0n1p0 --l2p_dram_limit 20 00:18:47.242 [2024-11-26 23:52:35.128879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.128923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:47.242 [2024-11-26 23:52:35.128937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.242 [2024-11-26 23:52:35.128945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.128993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.129001] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.242 [2024-11-26 23:52:35.129011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:47.242 [2024-11-26 23:52:35.129017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.129034] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:47.242 [2024-11-26 23:52:35.129252] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:47.242 [2024-11-26 23:52:35.129265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.129273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.242 [2024-11-26 23:52:35.129286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:18:47.242 [2024-11-26 23:52:35.129292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.129322] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a326b5dd-6ac6-47b5-a667-fe0efef50af3 00:18:47.242 [2024-11-26 23:52:35.130622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.130652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:47.242 [2024-11-26 23:52:35.130662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:47.242 [2024-11-26 23:52:35.130673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.137651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.137688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.242 [2024-11-26 23:52:35.137699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.926 ms 00:18:47.242 [2024-11-26 23:52:35.137713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.137775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.137786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.242 [2024-11-26 23:52:35.137811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:47.242 [2024-11-26 23:52:35.137820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.137852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.137862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:47.242 [2024-11-26 23:52:35.137869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:47.242 [2024-11-26 23:52:35.137883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.137900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:47.242 [2024-11-26 23:52:35.139579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.139606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.242 [2024-11-26 23:52:35.139617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:18:47.242 [2024-11-26 23:52:35.139624] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.139653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.242 [2024-11-26 23:52:35.139661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:47.242 [2024-11-26 23:52:35.139671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:47.242 [2024-11-26 23:52:35.139677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.242 [2024-11-26 23:52:35.139701] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:47.242 [2024-11-26 23:52:35.139830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:47.242 [2024-11-26 23:52:35.139841] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:47.242 [2024-11-26 23:52:35.139850] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:47.242 [2024-11-26 23:52:35.139860] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:47.242 [2024-11-26 23:52:35.139871] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:47.242 [2024-11-26 23:52:35.139882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:47.242 [2024-11-26 23:52:35.139888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:47.243 [2024-11-26 23:52:35.139895] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:47.243 [2024-11-26 23:52:35.139903] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:47.243 [2024-11-26 23:52:35.139911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.243 [2024-11-26 23:52:35.139916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:47.243 [2024-11-26 23:52:35.139924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:18:47.243 [2024-11-26 23:52:35.139930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.243 [2024-11-26 23:52:35.139996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.243 [2024-11-26 23:52:35.140002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:47.243 [2024-11-26 23:52:35.140010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:47.243 [2024-11-26 23:52:35.140016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.243 [2024-11-26 23:52:35.140095] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:47.243 [2024-11-26 23:52:35.140107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:47.243 [2024-11-26 23:52:35.140115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.243 [2024-11-26 23:52:35.140124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:47.243 [2024-11-26 23:52:35.140136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:47.243 
[2024-11-26 23:52:35.140149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:47.243 [2024-11-26 23:52:35.140156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.243 [2024-11-26 23:52:35.140167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:47.243 [2024-11-26 23:52:35.140172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:47.243 [2024-11-26 23:52:35.140180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.243 [2024-11-26 23:52:35.140185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:47.243 [2024-11-26 23:52:35.140192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:47.243 [2024-11-26 23:52:35.140197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:47.243 [2024-11-26 23:52:35.140209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:47.243 [2024-11-26 23:52:35.140215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:47.243 [2024-11-26 23:52:35.140227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.243 [2024-11-26 23:52:35.140238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:47.243 [2024-11-26 23:52:35.140243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.243 [2024-11-26 23:52:35.140258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:47.243 [2024-11-26 23:52:35.140266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.243 [2024-11-26 23:52:35.140279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:47.243 [2024-11-26 23:52:35.140285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.243 [2024-11-26 23:52:35.140297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:47.243 [2024-11-26 23:52:35.140304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.243 [2024-11-26 23:52:35.140315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:47.243 [2024-11-26 23:52:35.140320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:47.243 [2024-11-26 23:52:35.140327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.243 [2024-11-26 23:52:35.140332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:47.243 [2024-11-26 23:52:35.140338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:47.243 [2024-11-26 23:52:35.140343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:47.243 [2024-11-26 23:52:35.140355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:47.243 [2024-11-26 23:52:35.140361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140366] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:47.243 [2024-11-26 23:52:35.140375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:47.243 [2024-11-26 23:52:35.140381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.243 [2024-11-26 23:52:35.140388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.243 [2024-11-26 23:52:35.140394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:47.243 [2024-11-26 23:52:35.140400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:47.243 [2024-11-26 23:52:35.140405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:47.243 [2024-11-26 23:52:35.140412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:47.243 [2024-11-26 23:52:35.140417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:47.243 [2024-11-26 23:52:35.140424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:47.243 [2024-11-26 23:52:35.140432] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:47.243 [2024-11-26 23:52:35.140443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.243 [2024-11-26 23:52:35.140450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:47.243 [2024-11-26 23:52:35.140458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:47.243 [2024-11-26 23:52:35.140465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:47.243 [2024-11-26 23:52:35.140472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:47.243 [2024-11-26 23:52:35.140477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:47.243 [2024-11-26 23:52:35.140486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:47.243 [2024-11-26 23:52:35.140492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:47.243 [2024-11-26 23:52:35.140503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:47.243 [2024-11-26 23:52:35.140509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:47.243 [2024-11-26 23:52:35.140515] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:47.243 [2024-11-26 23:52:35.140521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:47.243 [2024-11-26 23:52:35.140528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:47.243 [2024-11-26 23:52:35.140533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:47.243 [2024-11-26 23:52:35.140540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:47.243 [2024-11-26 23:52:35.140546] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:47.243 [2024-11-26 23:52:35.140558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.243 [2024-11-26 23:52:35.140565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:47.243 [2024-11-26 23:52:35.140572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:47.243 [2024-11-26 23:52:35.140577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:47.243 [2024-11-26 23:52:35.140584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:47.243 [2024-11-26 23:52:35.140590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.243 [2024-11-26 23:52:35.140600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:47.243 [2024-11-26 23:52:35.140606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:18:47.243 [2024-11-26 23:52:35.140613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.243 [2024-11-26 23:52:35.140638] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:47.243 [2024-11-26 23:52:35.140647] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:51.448 [2024-11-26 23:52:39.110840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.448 [2024-11-26 23:52:39.110987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:51.449 [2024-11-26 23:52:39.111034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3970.180 ms 00:18:51.449 [2024-11-26 23:52:39.111064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.125164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.125212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:51.449 [2024-11-26 23:52:39.125226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.927 ms 00:18:51.449 [2024-11-26 23:52:39.125239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.125347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.125360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:51.449 [2024-11-26 23:52:39.125372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:51.449 [2024-11-26 23:52:39.125382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.149064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.149145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:51.449 [2024-11-26 23:52:39.149170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.639 ms 00:18:51.449 [2024-11-26 23:52:39.149191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.149252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.149280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:51.449 [2024-11-26 23:52:39.149296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.449 [2024-11-26 23:52:39.149315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.149943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.149999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:51.449 [2024-11-26 23:52:39.150021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:18:51.449 [2024-11-26 23:52:39.150045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.150266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.150287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:51.449 [2024-11-26 23:52:39.150308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:18:51.449 [2024-11-26 23:52:39.150325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.158026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.158063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:51.449 [2024-11-26 
23:52:39.158073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.672 ms 00:18:51.449 [2024-11-26 23:52:39.158082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.167396] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:51.449 [2024-11-26 23:52:39.173727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.173758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:51.449 [2024-11-26 23:52:39.173771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.579 ms 00:18:51.449 [2024-11-26 23:52:39.173779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.245036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.245097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:51.449 [2024-11-26 23:52:39.245113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.219 ms 00:18:51.449 [2024-11-26 23:52:39.245125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.245324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.245335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:51.449 [2024-11-26 23:52:39.245345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:18:51.449 [2024-11-26 23:52:39.245357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.250062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.250099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:51.449 [2024-11-26 23:52:39.250112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:18:51.449 [2024-11-26 23:52:39.250120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.253855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.253887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:51.449 [2024-11-26 23:52:39.253900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:18:51.449 [2024-11-26 23:52:39.253907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.254223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.254233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:51.449 [2024-11-26 23:52:39.254246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:18:51.449 [2024-11-26 23:52:39.254262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.288897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.288942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:51.449 [2024-11-26 23:52:39.288955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.601 ms 00:18:51.449 [2024-11-26 23:52:39.288963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.294489] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.294522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:51.449 [2024-11-26 23:52:39.294534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.479 ms 00:18:51.449 [2024-11-26 23:52:39.294542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.298435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.298465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:51.449 [2024-11-26 23:52:39.298476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.855 ms 00:18:51.449 [2024-11-26 23:52:39.298483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.302211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.302242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:51.449 [2024-11-26 23:52:39.302256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:18:51.449 [2024-11-26 23:52:39.302263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.302300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.302313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:51.449 [2024-11-26 23:52:39.302323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:51.449 [2024-11-26 23:52:39.302331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.302400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.449 [2024-11-26 23:52:39.302409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:51.449 [2024-11-26 23:52:39.302419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:51.449 [2024-11-26 23:52:39.302426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.449 [2024-11-26 23:52:39.303416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4174.070 ms, result 0 00:18:51.449 { 00:18:51.449 "name": "ftl0", 00:18:51.449 "uuid": "a326b5dd-6ac6-47b5-a667-fe0efef50af3" 00:18:51.449 } 00:18:51.449 23:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:51.449 23:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:51.449 23:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:51.449 23:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:51.709 [2024-11-26 23:52:39.608847] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:51.709 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:51.709 Zero copy mechanism will not be used. 00:18:51.709 Running I/O for 4 seconds... 
00:18:53.593 1260.00 IOPS, 83.67 MiB/s [2024-11-26T23:52:42.666Z] 1277.50 IOPS, 84.83 MiB/s [2024-11-26T23:52:44.045Z] 1221.33 IOPS, 81.10 MiB/s [2024-11-26T23:52:44.045Z] 1430.50 IOPS, 94.99 MiB/s 00:18:55.914 Latency(us) 00:18:55.914 [2024-11-26T23:52:44.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.914 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:55.914 ftl0 : 4.00 1430.15 94.97 0.00 0.00 736.13 163.84 2810.49 00:18:55.914 [2024-11-26T23:52:44.045Z] =================================================================================================================== 00:18:55.914 [2024-11-26T23:52:44.045Z] Total : 1430.15 94.97 0.00 0.00 736.13 163.84 2810.49 00:18:55.914 [2024-11-26 23:52:43.616521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:55.914 { 00:18:55.914 "results": [ 00:18:55.914 { 00:18:55.914 "job": "ftl0", 00:18:55.914 "core_mask": "0x1", 00:18:55.914 "workload": "randwrite", 00:18:55.914 "status": "finished", 00:18:55.914 "queue_depth": 1, 00:18:55.914 "io_size": 69632, 00:18:55.914 "runtime": 4.001679, 00:18:55.914 "iops": 1430.1496946656641, 00:18:55.914 "mibps": 94.97087816139175, 00:18:55.914 "io_failed": 0, 00:18:55.914 "io_timeout": 0, 00:18:55.914 "avg_latency_us": 736.1278468796623, 00:18:55.914 "min_latency_us": 163.84, 00:18:55.914 "max_latency_us": 2810.4861538461537 00:18:55.914 } 00:18:55.914 ], 00:18:55.914 "core_count": 1 00:18:55.914 } 00:18:55.914 23:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:55.914 [2024-11-26 23:52:43.726894] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:55.914 Running I/O for 4 seconds... 
00:18:57.805 7772.00 IOPS, 30.36 MiB/s [2024-11-26T23:52:46.881Z] 6382.50 IOPS, 24.93 MiB/s [2024-11-26T23:52:47.829Z] 5929.00 IOPS, 23.16 MiB/s [2024-11-26T23:52:47.829Z] 5772.00 IOPS, 22.55 MiB/s 00:18:59.698 Latency(us) 00:18:59.698 [2024-11-26T23:52:47.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.698 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:59.698 ftl0 : 4.04 5748.10 22.45 0.00 0.00 22166.01 234.73 56461.78 00:18:59.698 [2024-11-26T23:52:47.829Z] =================================================================================================================== 00:18:59.698 [2024-11-26T23:52:47.829Z] Total : 5748.10 22.45 0.00 0.00 22166.01 0.00 56461.78 00:18:59.698 [2024-11-26 23:52:47.771224] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:59.698 { 00:18:59.698 "results": [ 00:18:59.698 { 00:18:59.698 "job": "ftl0", 00:18:59.698 "core_mask": "0x1", 00:18:59.698 "workload": "randwrite", 00:18:59.698 "status": "finished", 00:18:59.698 "queue_depth": 128, 00:18:59.698 "io_size": 4096, 00:18:59.698 "runtime": 4.037157, 00:18:59.698 "iops": 5748.1044210071595, 00:18:59.698 "mibps": 22.453532894559217, 00:18:59.698 "io_failed": 0, 00:18:59.698 "io_timeout": 0, 00:18:59.698 "avg_latency_us": 22166.01176724852, 00:18:59.698 "min_latency_us": 234.7323076923077, 00:18:59.698 "max_latency_us": 56461.78461538462 00:18:59.698 } 00:18:59.698 ], 00:18:59.698 "core_count": 1 00:18:59.698 } 00:18:59.698 23:52:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:59.960 [2024-11-26 23:52:47.890288] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:59.960 Running I/O for 4 seconds... 
00:19:01.847 5771.00 IOPS, 22.54 MiB/s [2024-11-26T23:52:50.928Z] 5648.00 IOPS, 22.06 MiB/s [2024-11-26T23:52:51.916Z] 5656.00 IOPS, 22.09 MiB/s [2024-11-26T23:52:52.178Z] 5582.50 IOPS, 21.81 MiB/s 00:19:04.047 Latency(us) 00:19:04.047 [2024-11-26T23:52:52.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.047 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:04.047 Verification LBA range: start 0x0 length 0x1400000 00:19:04.047 ftl0 : 4.02 5590.80 21.84 0.00 0.00 22819.23 218.98 38111.70 00:19:04.047 [2024-11-26T23:52:52.178Z] =================================================================================================================== 00:19:04.047 [2024-11-26T23:52:52.178Z] Total : 5590.80 21.84 0.00 0.00 22819.23 0.00 38111.70 00:19:04.047 [2024-11-26 23:52:51.917231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:04.047 { 00:19:04.047 "results": [ 00:19:04.047 { 00:19:04.047 "job": "ftl0", 00:19:04.047 "core_mask": "0x1", 00:19:04.047 "workload": "verify", 00:19:04.047 "status": "finished", 00:19:04.047 "verify_range": { 00:19:04.047 "start": 0, 00:19:04.047 "length": 20971520 00:19:04.047 }, 00:19:04.047 "queue_depth": 128, 00:19:04.047 "io_size": 4096, 00:19:04.047 "runtime": 4.016955, 00:19:04.047 "iops": 5590.801988073055, 00:19:04.047 "mibps": 21.839070265910372, 00:19:04.047 "io_failed": 0, 00:19:04.047 "io_timeout": 0, 00:19:04.047 "avg_latency_us": 22819.23150085287, 00:19:04.047 "min_latency_us": 218.97846153846154, 00:19:04.047 "max_latency_us": 38111.70461538462 00:19:04.047 } 00:19:04.047 ], 00:19:04.047 "core_count": 1 00:19:04.047 } 00:19:04.047 23:52:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:04.047 [2024-11-26 23:52:52.141668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.047 [2024-11-26 23:52:52.141752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:04.047 [2024-11-26 23:52:52.141770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.047 [2024-11-26 23:52:52.141780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.047 [2024-11-26 23:52:52.141829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:04.047 [2024-11-26 23:52:52.142757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.047 [2024-11-26 23:52:52.142834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:04.047 [2024-11-26 23:52:52.142848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:19:04.047 [2024-11-26 23:52:52.142859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.047 [2024-11-26 23:52:52.146245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.047 [2024-11-26 23:52:52.146312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:04.047 [2024-11-26 23:52:52.146327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:19:04.047 [2024-11-26 23:52:52.146344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.308 [2024-11-26 23:52:52.394351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.308 [2024-11-26 23:52:52.394453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:04.308 [2024-11-26 23:52:52.394478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 247.981 ms 00:19:04.308 [2024-11-26 23:52:52.394491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.308 [2024-11-26 23:52:52.400740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.308 [2024-11-26 23:52:52.400806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:04.308 [2024-11-26 23:52:52.400819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.197 ms 00:19:04.308 [2024-11-26 23:52:52.400831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.308 [2024-11-26 23:52:52.404298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.308 [2024-11-26 23:52:52.404359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:04.308 [2024-11-26 23:52:52.404373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.400 ms 00:19:04.308 [2024-11-26 23:52:52.404385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.308 [2024-11-26 23:52:52.410995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.308 [2024-11-26 23:52:52.411058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:04.308 [2024-11-26 23:52:52.411071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.559 ms 00:19:04.308 [2024-11-26 23:52:52.411092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.308 [2024-11-26 23:52:52.411229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.309 [2024-11-26 23:52:52.411243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:04.309 [2024-11-26 23:52:52.411254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:04.309 [2024-11-26 23:52:52.411266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.309 [2024-11-26 23:52:52.414909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.309 [2024-11-26 23:52:52.414985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:04.309 [2024-11-26 23:52:52.415000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:19:04.309 [2024-11-26 23:52:52.415011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.309 [2024-11-26 23:52:52.418402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.309 [2024-11-26 23:52:52.418478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:04.309 [2024-11-26 23:52:52.418491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:19:04.309 [2024-11-26 23:52:52.418537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.309 [2024-11-26 23:52:52.420844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.309 [2024-11-26 23:52:52.420900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:04.309 [2024-11-26 23:52:52.420911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.255 ms 00:19:04.309 [2024-11-26 23:52:52.420925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.309 [2024-11-26 23:52:52.423341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.309 [2024-11-26 23:52:52.423400] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:04.309 [2024-11-26 23:52:52.423411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:19:04.309 [2024-11-26 23:52:52.423421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.309 [2024-11-26 23:52:52.423466] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:04.309 [2024-11-26 23:52:52.423503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:04.309 [2024-11-26 23:52:52.423715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.423991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:04.309 [2024-11-26 23:52:52.424189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424437] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:04.310 [2024-11-26 23:52:52.424484] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:04.310 [2024-11-26 23:52:52.424494] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a326b5dd-6ac6-47b5-a667-fe0efef50af3 00:19:04.310 [2024-11-26 23:52:52.424515] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:04.310 [2024-11-26 23:52:52.424523] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:04.310 [2024-11-26 23:52:52.424534] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:04.310 [2024-11-26 23:52:52.424544] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:04.310 [2024-11-26 23:52:52.424557] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:04.310 [2024-11-26 23:52:52.424566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:04.310 [2024-11-26 23:52:52.424577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:04.310 [2024-11-26 23:52:52.424584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:04.310 [2024-11-26 23:52:52.424593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:04.310 [2024-11-26 23:52:52.424602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.310 [2024-11-26 23:52:52.424617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:04.310 [2024-11-26 23:52:52.424630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:19:04.310 [2024-11-26 23:52:52.424641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.310 [2024-11-26 23:52:52.427960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.310 [2024-11-26 23:52:52.428004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:04.310 [2024-11-26 23:52:52.428015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:19:04.310 [2024-11-26 23:52:52.428026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.310 [2024-11-26 23:52:52.428183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.310 [2024-11-26 23:52:52.428199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:04.310 [2024-11-26 23:52:52.428208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:04.310 [2024-11-26 23:52:52.428222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.571 [2024-11-26 23:52:52.439521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.571 [2024-11-26 23:52:52.439593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.571 [2024-11-26 23:52:52.439614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.571 [2024-11-26 23:52:52.439630] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:04.571 [2024-11-26 23:52:52.439710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.571 [2024-11-26 23:52:52.439733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.571 [2024-11-26 23:52:52.439743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.571 [2024-11-26 23:52:52.439756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.571 [2024-11-26 23:52:52.439873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.571 [2024-11-26 23:52:52.439889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.571 [2024-11-26 23:52:52.439900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.571 [2024-11-26 23:52:52.439916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.571 [2024-11-26 23:52:52.439935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.571 [2024-11-26 23:52:52.439947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.571 [2024-11-26 23:52:52.439960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.571 [2024-11-26 23:52:52.439975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.571 [2024-11-26 23:52:52.460546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.571 [2024-11-26 23:52:52.460626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.571 [2024-11-26 23:52:52.460639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.571 [2024-11-26 23:52:52.460651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.571 [2024-11-26 23:52:52.477592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.572 [2024-11-26 23:52:52.477672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.572 [2024-11-26 23:52:52.477701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.572 [2024-11-26 23:52:52.477713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.572 [2024-11-26 23:52:52.477870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.572 [2024-11-26 23:52:52.477888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.572 [2024-11-26 23:52:52.477906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.572 [2024-11-26 23:52:52.477917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.572 [2024-11-26 23:52:52.477968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.572 [2024-11-26 23:52:52.477985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.572 [2024-11-26 23:52:52.477994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.572 [2024-11-26 23:52:52.478016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.572 [2024-11-26 23:52:52.478112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.572 [2024-11-26 23:52:52.478127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.572 [2024-11-26 23:52:52.478141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:04.572 [2024-11-26 23:52:52.478152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.572 [2024-11-26 23:52:52.478188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.572 [2024-11-26 23:52:52.478202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:04.572 [2024-11-26 23:52:52.478212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.572 [2024-11-26 23:52:52.478223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.572 [2024-11-26 23:52:52.478282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.572 [2024-11-26 23:52:52.478296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.572 [2024-11-26 23:52:52.478307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.572 [2024-11-26 23:52:52.478319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.572 [2024-11-26 23:52:52.478393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.572 [2024-11-26 23:52:52.478408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.572 [2024-11-26 23:52:52.478419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.572 [2024-11-26 23:52:52.478456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.572 [2024-11-26 23:52:52.478640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.917 ms, result 0 00:19:04.572 true 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 87022 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 87022 ']' 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 87022 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87022 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.572 killing process with pid 87022 00:19:04.572 Received shutdown signal, test time was about 4.000000 seconds 00:19:04.572 00:19:04.572 Latency(us) 00:19:04.572 [2024-11-26T23:52:52.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.572 [2024-11-26T23:52:52.703Z] =================================================================================================================== 00:19:04.572 [2024-11-26T23:52:52.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87022' 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 87022 00:19:04.572 23:52:52 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 87022 00:19:06.487 Remove shared memory files 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:06.487 23:52:54 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:06.487 ************************************ 00:19:06.487 END TEST ftl_bdevperf 00:19:06.487 ************************************ 00:19:06.487 00:19:06.487 real 0m23.345s 00:19:06.487 user 0m25.926s 00:19:06.487 sys 0m1.064s 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.487 23:52:54 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:06.487 23:52:54 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:06.487 23:52:54 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:06.487 23:52:54 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.487 23:52:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:06.487 ************************************ 00:19:06.487 START TEST ftl_trim 00:19:06.487 ************************************ 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:06.487 * Looking for test storage... 00:19:06.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.487 23:52:54 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:06.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.487 --rc genhtml_branch_coverage=1 00:19:06.487 --rc genhtml_function_coverage=1 00:19:06.487 --rc genhtml_legend=1 00:19:06.487 --rc geninfo_all_blocks=1 00:19:06.487 --rc geninfo_unexecuted_blocks=1 00:19:06.487 00:19:06.487 ' 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:06.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.487 --rc genhtml_branch_coverage=1 00:19:06.487 --rc genhtml_function_coverage=1 00:19:06.487 --rc genhtml_legend=1 00:19:06.487 --rc geninfo_all_blocks=1 00:19:06.487 --rc geninfo_unexecuted_blocks=1 00:19:06.487 00:19:06.487 ' 00:19:06.487 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:06.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.488 --rc genhtml_branch_coverage=1 00:19:06.488 --rc genhtml_function_coverage=1 00:19:06.488 --rc genhtml_legend=1 00:19:06.488 --rc geninfo_all_blocks=1 00:19:06.488 --rc geninfo_unexecuted_blocks=1 00:19:06.488 00:19:06.488 ' 00:19:06.488 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:06.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.488 --rc genhtml_branch_coverage=1 00:19:06.488 --rc genhtml_function_coverage=1 00:19:06.488 --rc genhtml_legend=1 00:19:06.488 --rc geninfo_all_blocks=1 00:19:06.488 --rc geninfo_unexecuted_blocks=1 00:19:06.488 00:19:06.488 ' 00:19:06.488 23:52:54 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:06.488 23:52:54 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:06.488 23:52:54 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.488 23:52:54 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.488 23:52:54 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.748 23:52:54 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87374 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87374 00:19:06.748 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87374 ']' 00:19:06.748 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.748 23:52:54 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:06.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.748 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.748 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.748 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.748 23:52:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:06.748 [2024-11-26 23:52:54.719528] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:19:06.748 [2024-11-26 23:52:54.719881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87374 ] 00:19:06.748 [2024-11-26 23:52:54.869186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:07.009 [2024-11-26 23:52:54.915230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.009 [2024-11-26 23:52:54.915372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.009 [2024-11-26 23:52:54.915419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.581 23:52:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.581 23:52:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:07.581 23:52:55 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:07.581 23:52:55 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:07.581 23:52:55 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:07.581 23:52:55 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:07.581 23:52:55 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:07.581 23:52:55 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:07.842 23:52:55 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:07.842 23:52:55 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:07.842 23:52:55 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:07.842 23:52:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:07.842 23:52:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.842 23:52:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:07.842 23:52:55 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:07.842 23:52:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:08.102 23:52:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:08.102 { 00:19:08.102 "name": "nvme0n1", 00:19:08.102 "aliases": [ 
00:19:08.102 "4b137255-6075-4911-a95f-9f3af4b99303" 00:19:08.102 ], 00:19:08.102 "product_name": "NVMe disk", 00:19:08.102 "block_size": 4096, 00:19:08.102 "num_blocks": 1310720, 00:19:08.102 "uuid": "4b137255-6075-4911-a95f-9f3af4b99303", 00:19:08.102 "numa_id": -1, 00:19:08.102 "assigned_rate_limits": { 00:19:08.102 "rw_ios_per_sec": 0, 00:19:08.102 "rw_mbytes_per_sec": 0, 00:19:08.102 "r_mbytes_per_sec": 0, 00:19:08.102 "w_mbytes_per_sec": 0 00:19:08.102 }, 00:19:08.102 "claimed": true, 00:19:08.102 "claim_type": "read_many_write_one", 00:19:08.102 "zoned": false, 00:19:08.102 "supported_io_types": { 00:19:08.102 "read": true, 00:19:08.102 "write": true, 00:19:08.102 "unmap": true, 00:19:08.102 "flush": true, 00:19:08.102 "reset": true, 00:19:08.102 "nvme_admin": true, 00:19:08.102 "nvme_io": true, 00:19:08.102 "nvme_io_md": false, 00:19:08.102 "write_zeroes": true, 00:19:08.102 "zcopy": false, 00:19:08.102 "get_zone_info": false, 00:19:08.102 "zone_management": false, 00:19:08.102 "zone_append": false, 00:19:08.102 "compare": true, 00:19:08.102 "compare_and_write": false, 00:19:08.102 "abort": true, 00:19:08.103 "seek_hole": false, 00:19:08.103 "seek_data": false, 00:19:08.103 "copy": true, 00:19:08.103 "nvme_iov_md": false 00:19:08.103 }, 00:19:08.103 "driver_specific": { 00:19:08.103 "nvme": [ 00:19:08.103 { 00:19:08.103 "pci_address": "0000:00:11.0", 00:19:08.103 "trid": { 00:19:08.103 "trtype": "PCIe", 00:19:08.103 "traddr": "0000:00:11.0" 00:19:08.103 }, 00:19:08.103 "ctrlr_data": { 00:19:08.103 "cntlid": 0, 00:19:08.103 "vendor_id": "0x1b36", 00:19:08.103 "model_number": "QEMU NVMe Ctrl", 00:19:08.103 "serial_number": "12341", 00:19:08.103 "firmware_revision": "8.0.0", 00:19:08.103 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:08.103 "oacs": { 00:19:08.103 "security": 0, 00:19:08.103 "format": 1, 00:19:08.103 "firmware": 0, 00:19:08.103 "ns_manage": 1 00:19:08.103 }, 00:19:08.103 "multi_ctrlr": false, 00:19:08.103 "ana_reporting": false 00:19:08.103 }, 00:19:08.103 "vs": { 00:19:08.103 "nvme_version": "1.4" 00:19:08.103 }, 00:19:08.103 "ns_data": { 00:19:08.103 "id": 1, 00:19:08.103 "can_share": false 00:19:08.103 } 00:19:08.103 } 00:19:08.103 ], 00:19:08.103 "mp_policy": "active_passive" 00:19:08.103 } 00:19:08.103 } 00:19:08.103 ]' 00:19:08.103 23:52:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:08.103 23:52:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:08.103 23:52:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:08.103 23:52:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:08.103 23:52:56 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:08.103 23:52:56 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:08.103 23:52:56 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:08.103 23:52:56 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:08.103 23:52:56 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:08.103 23:52:56 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:08.103 23:52:56 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:08.363 23:52:56 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=e9770fbd-b1a3-4354-96cc-66c8a21fd5cb 00:19:08.363 23:52:56 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:08.363 23:52:56 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u e9770fbd-b1a3-4354-96cc-66c8a21fd5cb 00:19:08.624 23:52:56 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:08.885 23:52:56 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=6560b0fa-a51f-4ce5-9686-0a6d5b7b2b34 00:19:08.885 23:52:56 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6560b0fa-a51f-4ce5-9686-0a6d5b7b2b34 00:19:09.146 23:52:57 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.146 23:52:57 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.146 23:52:57 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:09.146 23:52:57 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:09.146 23:52:57 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.146 23:52:57 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:09.146 23:52:57 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.146 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.146 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:09.146 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:09.146 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:09.146 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.146 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:09.146 { 00:19:09.146 "name": "2a8de157-459e-499d-88f5-4c44408afe8c", 00:19:09.146 "aliases": [ 00:19:09.146 "lvs/nvme0n1p0" 00:19:09.146 ], 00:19:09.146 "product_name": "Logical Volume", 00:19:09.146 "block_size": 4096, 00:19:09.146 "num_blocks": 26476544, 00:19:09.146 "uuid": "2a8de157-459e-499d-88f5-4c44408afe8c", 00:19:09.146 "assigned_rate_limits": { 00:19:09.146 "rw_ios_per_sec": 0, 00:19:09.146 "rw_mbytes_per_sec": 0, 00:19:09.146 "r_mbytes_per_sec": 0, 00:19:09.146 "w_mbytes_per_sec": 0 00:19:09.146 }, 00:19:09.146 "claimed": false, 00:19:09.146 "zoned": false, 00:19:09.146 "supported_io_types": { 00:19:09.146 "read": true, 00:19:09.146 "write": true, 00:19:09.146 "unmap": true, 00:19:09.146 "flush": false, 00:19:09.146 "reset": true, 00:19:09.146 "nvme_admin": false, 00:19:09.146 "nvme_io": false, 00:19:09.146 "nvme_io_md": false, 00:19:09.146 "write_zeroes": true, 00:19:09.146 "zcopy": false, 00:19:09.146 "get_zone_info": false, 00:19:09.146 "zone_management": false, 00:19:09.146 "zone_append": false, 00:19:09.146 "compare": false, 00:19:09.146 "compare_and_write": false, 00:19:09.146 "abort": false, 00:19:09.146 "seek_hole": true, 00:19:09.146 "seek_data": true, 00:19:09.146 "copy": false, 00:19:09.146 "nvme_iov_md": false 00:19:09.146 }, 00:19:09.146 "driver_specific": { 00:19:09.146 "lvol": { 00:19:09.146 "lvol_store_uuid": "6560b0fa-a51f-4ce5-9686-0a6d5b7b2b34", 00:19:09.146 "base_bdev": "nvme0n1", 00:19:09.146 "thin_provision": true, 00:19:09.146 "num_allocated_clusters": 0, 00:19:09.146 "snapshot": false, 00:19:09.147 "clone": false, 00:19:09.147 "esnap_clone": false 00:19:09.147 } 00:19:09.147 } 00:19:09.147 } 00:19:09.147 ]' 00:19:09.147 23:52:57 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:09.406 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:09.407 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:09.407 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:09.407 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:09.407 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:09.407 23:52:57 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:09.407 23:52:57 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:09.407 23:52:57 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:09.668 23:52:57 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:09.668 23:52:57 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:09.668 23:52:57 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.668 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.668 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:09.668 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:09.668 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:09.668 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.668 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:09.668 { 00:19:09.668 "name": "2a8de157-459e-499d-88f5-4c44408afe8c", 00:19:09.668 "aliases": [ 00:19:09.668 "lvs/nvme0n1p0" 00:19:09.668 ], 00:19:09.668 "product_name": "Logical Volume", 00:19:09.668 "block_size": 4096, 00:19:09.668 "num_blocks": 26476544, 00:19:09.668 "uuid": "2a8de157-459e-499d-88f5-4c44408afe8c", 00:19:09.668 "assigned_rate_limits": { 00:19:09.668 "rw_ios_per_sec": 0, 00:19:09.668 "rw_mbytes_per_sec": 0, 00:19:09.668 "r_mbytes_per_sec": 0, 00:19:09.668 "w_mbytes_per_sec": 0 00:19:09.668 }, 00:19:09.668 "claimed": false, 00:19:09.668 "zoned": false, 00:19:09.668 "supported_io_types": { 00:19:09.668 "read": true, 00:19:09.668 "write": true, 00:19:09.668 "unmap": true, 00:19:09.668 "flush": false, 00:19:09.668 "reset": true, 00:19:09.668 "nvme_admin": false, 00:19:09.668 "nvme_io": false, 00:19:09.668 "nvme_io_md": false, 00:19:09.668 "write_zeroes": true, 00:19:09.668 "zcopy": false, 00:19:09.668 "get_zone_info": false, 00:19:09.668 "zone_management": false, 00:19:09.668 "zone_append": false, 00:19:09.668 "compare": false, 00:19:09.668 "compare_and_write": false, 00:19:09.668 "abort": false, 00:19:09.668 "seek_hole": true, 00:19:09.668 "seek_data": true, 00:19:09.668 "copy": false, 00:19:09.668 "nvme_iov_md": false 00:19:09.668 }, 00:19:09.668 "driver_specific": { 00:19:09.668 "lvol": { 00:19:09.668 "lvol_store_uuid": "6560b0fa-a51f-4ce5-9686-0a6d5b7b2b34", 00:19:09.668 "base_bdev": "nvme0n1", 00:19:09.668 "thin_provision": true, 00:19:09.668 "num_allocated_clusters": 0, 00:19:09.668 "snapshot": false, 00:19:09.668 "clone": false, 00:19:09.668 "esnap_clone": false 00:19:09.668 } 00:19:09.668 } 00:19:09.668 } 00:19:09.668 ]' 00:19:09.668 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:09.928 23:52:57 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:09.928 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:09.928 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:09.928 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:09.928 23:52:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:09.928 23:52:57 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:09.928 23:52:57 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:09.928 23:52:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:09.928 23:52:58 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:09.928 23:52:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.928 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2a8de157-459e-499d-88f5-4c44408afe8c 00:19:09.928 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:09.928 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:09.928 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:09.928 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a8de157-459e-499d-88f5-4c44408afe8c 00:19:10.187 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:10.187 { 00:19:10.187 "name": "2a8de157-459e-499d-88f5-4c44408afe8c", 00:19:10.187 "aliases": [ 00:19:10.187 "lvs/nvme0n1p0" 00:19:10.187 ], 00:19:10.187 "product_name": "Logical Volume", 00:19:10.187 "block_size": 4096, 00:19:10.187 "num_blocks": 26476544, 00:19:10.187 "uuid": "2a8de157-459e-499d-88f5-4c44408afe8c", 00:19:10.187 "assigned_rate_limits": { 00:19:10.187 "rw_ios_per_sec": 0, 00:19:10.187 "rw_mbytes_per_sec": 0, 00:19:10.187 "r_mbytes_per_sec": 0, 00:19:10.188 "w_mbytes_per_sec": 0 00:19:10.188 }, 00:19:10.188 "claimed": false, 00:19:10.188 "zoned": false, 00:19:10.188 "supported_io_types": { 00:19:10.188 "read": true, 00:19:10.188 "write": true, 00:19:10.188 "unmap": true, 00:19:10.188 "flush": false, 00:19:10.188 "reset": true, 00:19:10.188 "nvme_admin": false, 00:19:10.188 "nvme_io": false, 00:19:10.188 "nvme_io_md": false, 00:19:10.188 "write_zeroes": true, 00:19:10.188 "zcopy": false, 00:19:10.188 "get_zone_info": false, 00:19:10.188 "zone_management": false, 00:19:10.188 "zone_append": false, 00:19:10.188 "compare": false, 00:19:10.188 "compare_and_write": false, 00:19:10.188 "abort": false, 00:19:10.188 "seek_hole": true, 00:19:10.188 "seek_data": true, 00:19:10.188 "copy": false, 00:19:10.188 "nvme_iov_md": false 00:19:10.188 }, 00:19:10.188 "driver_specific": { 00:19:10.188 "lvol": { 00:19:10.188 "lvol_store_uuid": "6560b0fa-a51f-4ce5-9686-0a6d5b7b2b34", 00:19:10.188 "base_bdev": "nvme0n1", 00:19:10.188 "thin_provision": true, 00:19:10.188 "num_allocated_clusters": 0, 00:19:10.188 "snapshot": false, 00:19:10.188 "clone": false, 00:19:10.188 "esnap_clone": false 00:19:10.188 } 00:19:10.188 } 00:19:10.188 } 00:19:10.188 ]' 00:19:10.188 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:10.188 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:10.188 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:10.447 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:10.447 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:10.447 23:52:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:10.447 23:52:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:10.447 23:52:58 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2a8de157-459e-499d-88f5-4c44408afe8c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:10.447 [2024-11-26 23:52:58.511199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.511356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:10.447 [2024-11-26 23:52:58.511377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:10.447 [2024-11-26 23:52:58.511391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.513983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.514021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.447 [2024-11-26 23:52:58.514033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:19:10.447 [2024-11-26 23:52:58.514045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.514150] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:10.447 [2024-11-26 23:52:58.514416] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:10.447 [2024-11-26 23:52:58.514467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.514480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.447 [2024-11-26 23:52:58.514491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:19:10.447 [2024-11-26 23:52:58.514500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.514635] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6f06a1c5-8be6-4e88-9b85-7277c6542241 00:19:10.447 [2024-11-26 23:52:58.516025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.516049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:10.447 [2024-11-26 23:52:58.516059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:10.447 [2024-11-26 23:52:58.516067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.523243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.523274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.447 [2024-11-26 23:52:58.523285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.095 ms 00:19:10.447 [2024-11-26 23:52:58.523305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.523422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.523432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.447 [2024-11-26 23:52:58.523445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.063 ms 00:19:10.447 [2024-11-26 23:52:58.523453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.523504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.523512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:10.447 [2024-11-26 23:52:58.523524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:10.447 [2024-11-26 23:52:58.523532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.523580] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:10.447 [2024-11-26 23:52:58.525366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.525484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.447 [2024-11-26 23:52:58.525500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.792 ms 00:19:10.447 [2024-11-26 23:52:58.525510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.525578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.525589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:10.447 [2024-11-26 23:52:58.525597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:10.447 [2024-11-26 23:52:58.525608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.525643] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:10.447 [2024-11-26 23:52:58.525837] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:10.447 [2024-11-26 23:52:58.525851] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:10.447 [2024-11-26 23:52:58.525864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:10.447 [2024-11-26 23:52:58.525875] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:10.447 [2024-11-26 23:52:58.525886] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:10.447 [2024-11-26 23:52:58.525894] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:10.447 [2024-11-26 23:52:58.525903] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:10.447 [2024-11-26 23:52:58.525910] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:10.447 [2024-11-26 23:52:58.525923] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:10.447 [2024-11-26 23:52:58.525931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 [2024-11-26 23:52:58.525940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:10.447 [2024-11-26 23:52:58.525948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:19:10.447 [2024-11-26 23:52:58.525957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.526065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.447 
[2024-11-26 23:52:58.526078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:10.447 [2024-11-26 23:52:58.526087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:10.447 [2024-11-26 23:52:58.526097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.447 [2024-11-26 23:52:58.526217] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:10.447 [2024-11-26 23:52:58.526229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:10.447 [2024-11-26 23:52:58.526238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.447 [2024-11-26 23:52:58.526248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.447 [2024-11-26 23:52:58.526267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:10.447 [2024-11-26 23:52:58.526276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:10.447 [2024-11-26 23:52:58.526284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:10.447 [2024-11-26 23:52:58.526293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:10.447 [2024-11-26 23:52:58.526301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:10.447 [2024-11-26 23:52:58.526310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.447 [2024-11-26 23:52:58.526318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:10.447 [2024-11-26 23:52:58.526327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:10.447 [2024-11-26 23:52:58.526336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.447 [2024-11-26 23:52:58.526350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:10.447 [2024-11-26 23:52:58.526358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:10.447 [2024-11-26 23:52:58.526368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.447 [2024-11-26 23:52:58.526375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:10.447 [2024-11-26 23:52:58.526384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:10.447 [2024-11-26 23:52:58.526392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.447 [2024-11-26 23:52:58.526401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:10.447 [2024-11-26 23:52:58.526409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:10.447 [2024-11-26 23:52:58.526431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.448 [2024-11-26 23:52:58.526439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:10.448 [2024-11-26 23:52:58.526448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:10.448 [2024-11-26 23:52:58.526454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.448 [2024-11-26 23:52:58.526463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:10.448 [2024-11-26 23:52:58.526470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:10.448 [2024-11-26 23:52:58.526477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.448 [2024-11-26 23:52:58.526484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:10.448 [2024-11-26 23:52:58.526496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:10.448 [2024-11-26 23:52:58.526503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.448 [2024-11-26 23:52:58.526512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:10.448 [2024-11-26 23:52:58.526518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:10.448 [2024-11-26 23:52:58.526527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.448 [2024-11-26 23:52:58.526533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:10.448 [2024-11-26 23:52:58.526541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:10.448 [2024-11-26 23:52:58.526548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.448 [2024-11-26 23:52:58.526558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:10.448 [2024-11-26 23:52:58.526564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:10.448 [2024-11-26 23:52:58.526573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.448 [2024-11-26 23:52:58.526580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:10.448 [2024-11-26 23:52:58.526588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:10.448 [2024-11-26 23:52:58.526594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.448 [2024-11-26 23:52:58.526603] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:10.448 [2024-11-26 23:52:58.526610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:10.448 [2024-11-26 23:52:58.526621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.448 [2024-11-26 23:52:58.526628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.448 [2024-11-26 23:52:58.526639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:10.448 [2024-11-26 23:52:58.526645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:10.448 [2024-11-26 23:52:58.526653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:10.448 [2024-11-26 23:52:58.526661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:10.448 [2024-11-26 23:52:58.526669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:10.448 [2024-11-26 23:52:58.526675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:10.448 [2024-11-26 23:52:58.526686] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:10.448 [2024-11-26 23:52:58.526695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.448 [2024-11-26 23:52:58.526706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:10.448 [2024-11-26 23:52:58.526713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:10.448 [2024-11-26 23:52:58.526722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:10.448 [2024-11-26 23:52:58.526729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:10.448 [2024-11-26 23:52:58.526738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:10.448 [2024-11-26 23:52:58.526745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:10.448 [2024-11-26 23:52:58.526759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:10.448 [2024-11-26 23:52:58.526766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:10.448 [2024-11-26 23:52:58.526775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:10.448 [2024-11-26 23:52:58.526782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:10.448 [2024-11-26 23:52:58.526802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:10.448 [2024-11-26 23:52:58.526810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:10.448 [2024-11-26 23:52:58.526818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:10.448 [2024-11-26 23:52:58.526826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:10.448 [2024-11-26 23:52:58.526835] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:10.448 [2024-11-26 23:52:58.526845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.448 [2024-11-26 23:52:58.526854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:10.448 [2024-11-26 23:52:58.526861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:10.448 [2024-11-26 23:52:58.526870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:10.448 [2024-11-26 23:52:58.526878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:10.448 [2024-11-26 23:52:58.526887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.448 [2024-11-26 23:52:58.526895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:10.448 [2024-11-26 23:52:58.526913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:19:10.448 [2024-11-26 23:52:58.526930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.448 [2024-11-26 23:52:58.527014] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:10.448 [2024-11-26 23:52:58.527023] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:12.976 [2024-11-26 23:53:00.775665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.775737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:12.976 [2024-11-26 23:53:00.775759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2248.636 ms 00:19:12.976 [2024-11-26 23:53:00.775768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.786913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.786960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.976 [2024-11-26 23:53:00.786976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.026 ms 00:19:12.976 [2024-11-26 23:53:00.786985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.787124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.787135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.976 [2024-11-26 23:53:00.787149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:12.976 [2024-11-26 23:53:00.787157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.807338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.807384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.976 [2024-11-26 23:53:00.807398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.144 ms 00:19:12.976 [2024-11-26 23:53:00.807407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.807490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.807504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.976 [2024-11-26 23:53:00.807515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.976 [2024-11-26 23:53:00.807523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.807982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.808000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.976 [2024-11-26 23:53:00.808012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:19:12.976 [2024-11-26 23:53:00.808034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.808183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.808194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.976 [2024-11-26 23:53:00.808207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:12.976 [2024-11-26 23:53:00.808216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.815474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.815510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:12.976 [2024-11-26 23:53:00.815534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.223 ms 00:19:12.976 [2024-11-26 23:53:00.815543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.825109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:12.976 [2024-11-26 23:53:00.842444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.842481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:12.976 [2024-11-26 23:53:00.842492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.790 ms 00:19:12.976 [2024-11-26 23:53:00.842502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.894079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.894168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:12.976 [2024-11-26 23:53:00.894211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.475 ms 00:19:12.976 [2024-11-26 23:53:00.894241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.894631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.894667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:12.976 [2024-11-26 23:53:00.894685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:19:12.976 [2024-11-26 23:53:00.894704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.899837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.900080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:12.976 [2024-11-26 23:53:00.900113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.049 ms 00:19:12.976 [2024-11-26 23:53:00.900132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.904393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.904452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:12.976 [2024-11-26 23:53:00.904472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.188 ms 00:19:12.976 [2024-11-26 23:53:00.904490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.905133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.905176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:12.976 [2024-11-26 23:53:00.905197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:19:12.976 [2024-11-26 23:53:00.905221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.930244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.930281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:12.976 [2024-11-26 23:53:00.930293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.970 ms 00:19:12.976 [2024-11-26 23:53:00.930317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
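The FTL management steps being traced here are the bring-up of ftl0 that bdev_ftl_create kicked off. Condensed to just the RPC calls visible in this test's output (UUIDs replaced with placeholders; a spdk_tgt already listening on the default RPC socket is assumed), the sequence the trim test drives is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base (data) device
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base namespace
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>         # thin-provisioned 103424 MiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB write-buffer slice
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 \
         --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
    $rpc bdev_get_bdevs -b ftl0 -t 2000                                 # wait for ftl0 and inspect it
    $rpc bdev_ftl_unload -b ftl0                                        # tear down at the end of the test

The bdev_ftl_create call replays the startup pipeline logged above (superblock, bands, NV cache scrub, L2P), which is why scrubbing the 5 nvc0n1p0 chunks accounts for most of the roughly 2.4 s startup duration reported a few lines below.
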
00:19:12.976 [2024-11-26 23:53:00.934316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.934349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:12.976 [2024-11-26 23:53:00.934359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.920 ms 00:19:12.976 [2024-11-26 23:53:00.934369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.937459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.937492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:12.976 [2024-11-26 23:53:00.937501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms 00:19:12.976 [2024-11-26 23:53:00.937510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.941245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.976 [2024-11-26 23:53:00.941289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:12.976 [2024-11-26 23:53:00.941301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:19:12.976 [2024-11-26 23:53:00.941313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.976 [2024-11-26 23:53:00.941380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.977 [2024-11-26 23:53:00.941393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:12.977 [2024-11-26 23:53:00.941402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:12.977 [2024-11-26 23:53:00.941411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.977 [2024-11-26 23:53:00.941496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.977 [2024-11-26 23:53:00.941507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:12.977 [2024-11-26 23:53:00.941516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:12.977 [2024-11-26 23:53:00.941525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.977 [2024-11-26 23:53:00.942630] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.977 [2024-11-26 23:53:00.943638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2431.014 ms, result 0 00:19:12.977 [2024-11-26 23:53:00.944417] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.977 { 00:19:12.977 "name": "ftl0", 00:19:12.977 "uuid": "6f06a1c5-8be6-4e88-9b85-7277c6542241" 00:19:12.977 } 00:19:12.977 23:53:00 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:12.977 23:53:00 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:12.977 23:53:00 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:12.977 23:53:00 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:12.977 23:53:00 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:12.977 23:53:00 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:12.977 23:53:00 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:13.234 23:53:01 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:13.234 [ 00:19:13.234 { 00:19:13.234 "name": "ftl0", 00:19:13.234 "aliases": [ 00:19:13.234 "6f06a1c5-8be6-4e88-9b85-7277c6542241" 00:19:13.234 ], 00:19:13.234 "product_name": "FTL disk", 00:19:13.234 "block_size": 4096, 00:19:13.234 "num_blocks": 23592960, 00:19:13.234 "uuid": "6f06a1c5-8be6-4e88-9b85-7277c6542241", 00:19:13.234 "assigned_rate_limits": { 00:19:13.234 "rw_ios_per_sec": 0, 00:19:13.234 "rw_mbytes_per_sec": 0, 00:19:13.234 "r_mbytes_per_sec": 0, 00:19:13.234 "w_mbytes_per_sec": 0 00:19:13.234 }, 00:19:13.234 "claimed": false, 00:19:13.234 "zoned": false, 00:19:13.234 "supported_io_types": { 00:19:13.234 "read": true, 00:19:13.234 "write": true, 00:19:13.234 "unmap": true, 00:19:13.234 "flush": true, 00:19:13.234 "reset": false, 00:19:13.234 "nvme_admin": false, 00:19:13.235 "nvme_io": false, 00:19:13.235 "nvme_io_md": false, 00:19:13.235 "write_zeroes": true, 00:19:13.235 "zcopy": false, 00:19:13.235 "get_zone_info": false, 00:19:13.235 "zone_management": false, 00:19:13.235 "zone_append": false, 00:19:13.235 "compare": false, 00:19:13.235 "compare_and_write": false, 00:19:13.235 "abort": false, 00:19:13.235 "seek_hole": false, 00:19:13.235 "seek_data": false, 00:19:13.235 "copy": false, 00:19:13.235 "nvme_iov_md": false 00:19:13.235 }, 00:19:13.235 "driver_specific": { 00:19:13.235 "ftl": { 00:19:13.235 "base_bdev": "2a8de157-459e-499d-88f5-4c44408afe8c", 00:19:13.235 "cache": "nvc0n1p0" 00:19:13.235 } 00:19:13.235 } 00:19:13.235 } 00:19:13.235 ] 00:19:13.235 23:53:01 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:13.235 23:53:01 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:13.235 23:53:01 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:13.492 23:53:01 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:13.492 23:53:01 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:13.750 23:53:01 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:13.750 { 00:19:13.750 "name": "ftl0", 00:19:13.750 "aliases": [ 00:19:13.750 "6f06a1c5-8be6-4e88-9b85-7277c6542241" 00:19:13.750 ], 00:19:13.750 "product_name": "FTL disk", 00:19:13.750 "block_size": 4096, 00:19:13.750 "num_blocks": 23592960, 00:19:13.750 "uuid": "6f06a1c5-8be6-4e88-9b85-7277c6542241", 00:19:13.750 "assigned_rate_limits": { 00:19:13.750 "rw_ios_per_sec": 0, 00:19:13.750 "rw_mbytes_per_sec": 0, 00:19:13.751 "r_mbytes_per_sec": 0, 00:19:13.751 "w_mbytes_per_sec": 0 00:19:13.751 }, 00:19:13.751 "claimed": false, 00:19:13.751 "zoned": false, 00:19:13.751 "supported_io_types": { 00:19:13.751 "read": true, 00:19:13.751 "write": true, 00:19:13.751 "unmap": true, 00:19:13.751 "flush": true, 00:19:13.751 "reset": false, 00:19:13.751 "nvme_admin": false, 00:19:13.751 "nvme_io": false, 00:19:13.751 "nvme_io_md": false, 00:19:13.751 "write_zeroes": true, 00:19:13.751 "zcopy": false, 00:19:13.751 "get_zone_info": false, 00:19:13.751 "zone_management": false, 00:19:13.751 "zone_append": false, 00:19:13.751 "compare": false, 00:19:13.751 "compare_and_write": false, 00:19:13.751 "abort": false, 00:19:13.751 "seek_hole": false, 00:19:13.751 "seek_data": false, 00:19:13.751 "copy": false, 00:19:13.751 "nvme_iov_md": false 00:19:13.751 }, 00:19:13.751 "driver_specific": { 00:19:13.751 "ftl": { 00:19:13.751 "base_bdev": "2a8de157-459e-499d-88f5-4c44408afe8c", 
00:19:13.751 "cache": "nvc0n1p0" 00:19:13.751 } 00:19:13.751 } 00:19:13.751 } 00:19:13.751 ]' 00:19:13.751 23:53:01 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:13.751 23:53:01 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:13.751 23:53:01 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:14.011 [2024-11-26 23:53:01.992584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.011 [2024-11-26 23:53:01.992631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:14.011 [2024-11-26 23:53:01.992647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.011 [2024-11-26 23:53:01.992656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.011 [2024-11-26 23:53:01.992705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:14.011 [2024-11-26 23:53:01.993280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.011 [2024-11-26 23:53:01.993308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.011 [2024-11-26 23:53:01.993333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:19:14.011 [2024-11-26 23:53:01.993347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.011 [2024-11-26 23:53:01.993964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.011 [2024-11-26 23:53:01.993985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.011 [2024-11-26 23:53:01.993994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:19:14.011 [2024-11-26 23:53:01.994004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.011 [2024-11-26 23:53:01.997645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.011 [2024-11-26 23:53:01.997669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.011 [2024-11-26 23:53:01.997679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:19:14.011 [2024-11-26 23:53:01.997702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.011 [2024-11-26 23:53:02.004701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.011 [2024-11-26 23:53:02.005602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:14.011 [2024-11-26 23:53:02.005622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.927 ms 00:19:14.011 [2024-11-26 23:53:02.005636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.011 [2024-11-26 23:53:02.007280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.011 [2024-11-26 23:53:02.007312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:14.011 [2024-11-26 23:53:02.007321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:19:14.011 [2024-11-26 23:53:02.007330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.011 [2024-11-26 23:53:02.010998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.011 [2024-11-26 23:53:02.011034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:14.011 [2024-11-26 23:53:02.011044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.621 ms 00:19:14.011 [2024-11-26 23:53:02.011060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.011 [2024-11-26 23:53:02.011243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.011 [2024-11-26 23:53:02.011258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:14.011 [2024-11-26 23:53:02.011267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:14.012 [2024-11-26 23:53:02.011276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.012 [2024-11-26 23:53:02.012802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.012 [2024-11-26 23:53:02.012833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:14.012 [2024-11-26 23:53:02.012841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:19:14.012 [2024-11-26 23:53:02.012853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.012 [2024-11-26 23:53:02.014215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.012 [2024-11-26 23:53:02.014247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:14.012 [2024-11-26 23:53:02.014256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:19:14.012 [2024-11-26 23:53:02.014264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.012 [2024-11-26 23:53:02.015366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.012 [2024-11-26 23:53:02.015399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:14.012 [2024-11-26 23:53:02.015408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:19:14.012 [2024-11-26 23:53:02.015416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.012 [2024-11-26 23:53:02.016375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.012 [2024-11-26 23:53:02.016494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:14.012 [2024-11-26 23:53:02.016508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:19:14.012 [2024-11-26 23:53:02.016516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.012 [2024-11-26 23:53:02.016564] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:14.012 [2024-11-26 23:53:02.016579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016645] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 
23:53:02.016873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.016992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:14.012 [2024-11-26 23:53:02.017068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:19:14.013 [2024-11-26 23:53:02.017086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:14.013 [2024-11-26 23:53:02.017482] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:14.013 [2024-11-26 23:53:02.017490] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f06a1c5-8be6-4e88-9b85-7277c6542241 00:19:14.013 [2024-11-26 23:53:02.017500] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:14.013 [2024-11-26 23:53:02.017509] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:14.013 [2024-11-26 23:53:02.017518] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:14.013 [2024-11-26 23:53:02.017525] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:14.013 [2024-11-26 23:53:02.017535] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:14.013 [2024-11-26 23:53:02.017543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:14.013 
[2024-11-26 23:53:02.017552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:14.013 [2024-11-26 23:53:02.017558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:14.013 [2024-11-26 23:53:02.017566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:14.013 [2024-11-26 23:53:02.017573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.013 [2024-11-26 23:53:02.017582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:14.013 [2024-11-26 23:53:02.017590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:19:14.013 [2024-11-26 23:53:02.017601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.013 [2024-11-26 23:53:02.019553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.013 [2024-11-26 23:53:02.019573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:14.013 [2024-11-26 23:53:02.019582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.917 ms 00:19:14.013 [2024-11-26 23:53:02.019592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.013 [2024-11-26 23:53:02.019722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.013 [2024-11-26 23:53:02.019734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:14.013 [2024-11-26 23:53:02.019742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:14.013 [2024-11-26 23:53:02.019752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.013 [2024-11-26 23:53:02.026614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.013 [2024-11-26 23:53:02.026720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.013 [2024-11-26 23:53:02.026783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.013 [2024-11-26 23:53:02.026821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.013 [2024-11-26 23:53:02.026944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.013 [2024-11-26 23:53:02.027008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.013 [2024-11-26 23:53:02.027053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.013 [2024-11-26 23:53:02.027113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.013 [2024-11-26 23:53:02.027190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.027228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.014 [2024-11-26 23:53:02.027249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.027270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.027312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.027371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.014 [2024-11-26 23:53:02.027394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.027414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.039444] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.039563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.014 [2024-11-26 23:53:02.039613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.039639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.049358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.049473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.014 [2024-11-26 23:53:02.049525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.049553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.049676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.049748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.014 [2024-11-26 23:53:02.049814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.049859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.049955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.049993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.014 [2024-11-26 23:53:02.050058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.050084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.050262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.050325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.014 [2024-11-26 23:53:02.050372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.050416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.050502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.050545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.014 [2024-11-26 23:53:02.050578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.050601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.050665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.050689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.014 [2024-11-26 23:53:02.050711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.050732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 [2024-11-26 23:53:02.050817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.014 [2024-11-26 23:53:02.050845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.014 [2024-11-26 23:53:02.050865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.014 [2024-11-26 23:53:02.050886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.014 
[2024-11-26 23:53:02.051151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.521 ms, result 0 00:19:14.014 true 00:19:14.014 23:53:02 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87374 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87374 ']' 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87374 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87374 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87374' 00:19:14.014 killing process with pid 87374 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87374 00:19:14.014 23:53:02 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87374 00:19:19.276 23:53:07 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:20.212 65536+0 records in 00:19:20.212 65536+0 records out 00:19:20.212 268435456 bytes (268 MB, 256 MiB) copied, 0.81381 s, 330 MB/s 00:19:20.212 23:53:08 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:20.212 [2024-11-26 23:53:08.127342] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:19:20.212 [2024-11-26 23:53:08.127534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87534 ] 00:19:20.212 [2024-11-26 23:53:08.269062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.212 [2024-11-26 23:53:08.298519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.473 [2024-11-26 23:53:08.439517] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.473 [2024-11-26 23:53:08.439619] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.736 [2024-11-26 23:53:08.602218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.602282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:20.736 [2024-11-26 23:53:08.602304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:20.736 [2024-11-26 23:53:08.602313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.604985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.605038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.736 [2024-11-26 23:53:08.605049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:19:20.736 [2024-11-26 23:53:08.605057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.605177] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:20.736 [2024-11-26 23:53:08.605461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:20.736 [2024-11-26 23:53:08.605484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.605492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.736 [2024-11-26 23:53:08.605502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:19:20.736 [2024-11-26 23:53:08.605509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.608837] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:20.736 [2024-11-26 23:53:08.614767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.614881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:20.736 [2024-11-26 23:53:08.614920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.936 ms 00:19:20.736 [2024-11-26 23:53:08.614941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.615090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.615119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:20.736 [2024-11-26 23:53:08.615142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:20.736 [2024-11-26 23:53:08.615162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.626027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:20.736 [2024-11-26 23:53:08.626070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.736 [2024-11-26 23:53:08.626082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.693 ms 00:19:20.736 [2024-11-26 23:53:08.626094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.626231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.626244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.736 [2024-11-26 23:53:08.626257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:20.736 [2024-11-26 23:53:08.626268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.626297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.626306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:20.736 [2024-11-26 23:53:08.626314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:20.736 [2024-11-26 23:53:08.626327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.626353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:20.736 [2024-11-26 23:53:08.628760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.628977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.736 [2024-11-26 23:53:08.628995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:19:20.736 [2024-11-26 23:53:08.629010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.629062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.736 [2024-11-26 23:53:08.629071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:20.736 [2024-11-26 23:53:08.629080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:20.736 [2024-11-26 23:53:08.629087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.736 [2024-11-26 23:53:08.629107] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:20.736 [2024-11-26 23:53:08.629129] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:20.736 [2024-11-26 23:53:08.629174] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:20.736 [2024-11-26 23:53:08.629193] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:20.736 [2024-11-26 23:53:08.629304] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:20.736 [2024-11-26 23:53:08.629315] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:20.736 [2024-11-26 23:53:08.629327] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:20.736 [2024-11-26 23:53:08.629342] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:20.736 [2024-11-26 23:53:08.629351] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:20.736 [2024-11-26 23:53:08.629363] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:20.737 [2024-11-26 23:53:08.629371] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:20.737 [2024-11-26 23:53:08.629379] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:20.737 [2024-11-26 23:53:08.629390] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:20.737 [2024-11-26 23:53:08.629401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.737 [2024-11-26 23:53:08.629409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:20.737 [2024-11-26 23:53:08.629420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:19:20.737 [2024-11-26 23:53:08.629428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.737 [2024-11-26 23:53:08.629517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.737 [2024-11-26 23:53:08.629527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:20.737 [2024-11-26 23:53:08.629539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:20.737 [2024-11-26 23:53:08.629547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.737 [2024-11-26 23:53:08.629648] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:20.737 [2024-11-26 23:53:08.629663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:20.737 [2024-11-26 23:53:08.629673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.737 [2024-11-26 23:53:08.629682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:20.737 [2024-11-26 23:53:08.629726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:20.737 [2024-11-26 23:53:08.629745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:20.737 [2024-11-26 23:53:08.629755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.737 [2024-11-26 23:53:08.629771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:20.737 [2024-11-26 23:53:08.629783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:20.737 [2024-11-26 23:53:08.629808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.737 [2024-11-26 23:53:08.629817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:20.737 [2024-11-26 23:53:08.629826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:20.737 [2024-11-26 23:53:08.629834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:20.737 [2024-11-26 23:53:08.629852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:20.737 [2024-11-26 23:53:08.629860] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:20.737 [2024-11-26 23:53:08.629876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.737 [2024-11-26 23:53:08.629900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:20.737 [2024-11-26 23:53:08.629907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.737 [2024-11-26 23:53:08.629919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:20.737 [2024-11-26 23:53:08.629926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.737 [2024-11-26 23:53:08.629939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:20.737 [2024-11-26 23:53:08.629946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.737 [2024-11-26 23:53:08.629960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:20.737 [2024-11-26 23:53:08.629967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:20.737 [2024-11-26 23:53:08.629973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.737 [2024-11-26 23:53:08.629980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:20.737 [2024-11-26 23:53:08.629987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:20.737 [2024-11-26 23:53:08.629994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.737 [2024-11-26 23:53:08.630001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:20.737 [2024-11-26 23:53:08.630011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:20.737 [2024-11-26 23:53:08.630018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.737 [2024-11-26 23:53:08.630025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:20.737 [2024-11-26 23:53:08.630031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:20.737 [2024-11-26 23:53:08.630038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.737 [2024-11-26 23:53:08.630046] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:20.737 [2024-11-26 23:53:08.630054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:20.737 [2024-11-26 23:53:08.630063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.737 [2024-11-26 23:53:08.630070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.737 [2024-11-26 23:53:08.630077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:20.737 [2024-11-26 23:53:08.630084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:20.737 [2024-11-26 23:53:08.630090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:20.737 
[2024-11-26 23:53:08.630097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:20.737 [2024-11-26 23:53:08.630103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:20.737 [2024-11-26 23:53:08.630110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:20.737 [2024-11-26 23:53:08.630119] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:20.737 [2024-11-26 23:53:08.630131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.737 [2024-11-26 23:53:08.630143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:20.737 [2024-11-26 23:53:08.630150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:20.737 [2024-11-26 23:53:08.630158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:20.737 [2024-11-26 23:53:08.630165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:20.737 [2024-11-26 23:53:08.630172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:20.737 [2024-11-26 23:53:08.630180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:20.737 [2024-11-26 23:53:08.630188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:20.737 [2024-11-26 23:53:08.630201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:20.737 [2024-11-26 23:53:08.630208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:20.737 [2024-11-26 23:53:08.630215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:20.737 [2024-11-26 23:53:08.630222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:20.737 [2024-11-26 23:53:08.630229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:20.737 [2024-11-26 23:53:08.630237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:20.737 [2024-11-26 23:53:08.630244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:20.737 [2024-11-26 23:53:08.630252] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:20.737 [2024-11-26 23:53:08.630266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.737 [2024-11-26 23:53:08.630275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:20.737 [2024-11-26 23:53:08.630283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:20.737 [2024-11-26 23:53:08.630290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:20.737 [2024-11-26 23:53:08.630297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:20.737 [2024-11-26 23:53:08.630306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.737 [2024-11-26 23:53:08.630314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:20.737 [2024-11-26 23:53:08.630322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:19:20.737 [2024-11-26 23:53:08.630330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.737 [2024-11-26 23:53:08.648444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.737 [2024-11-26 23:53:08.648613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.737 [2024-11-26 23:53:08.648697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.042 ms 00:19:20.737 [2024-11-26 23:53:08.648722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.737 [2024-11-26 23:53:08.648885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.737 [2024-11-26 23:53:08.649433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.737 [2024-11-26 23:53:08.649491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:20.737 [2024-11-26 23:53:08.649515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.737 [2024-11-26 23:53:08.675345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.675577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.738 [2024-11-26 23:53:08.675671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.766 ms 00:19:20.738 [2024-11-26 23:53:08.675717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.675897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.675944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.738 [2024-11-26 23:53:08.675978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:20.738 [2024-11-26 23:53:08.676125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.676828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.676945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.738 [2024-11-26 23:53:08.677002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:19:20.738 [2024-11-26 23:53:08.677032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.677228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.677343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.738 [2024-11-26 23:53:08.677392] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:20.738 [2024-11-26 23:53:08.677414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.687844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.687984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.738 [2024-11-26 23:53:08.688049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.371 ms 00:19:20.738 [2024-11-26 23:53:08.688073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.692238] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:20.738 [2024-11-26 23:53:08.692412] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.738 [2024-11-26 23:53:08.692504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.692674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.738 [2024-11-26 23:53:08.692862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.283 ms 00:19:20.738 [2024-11-26 23:53:08.692957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.708655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.708833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.738 [2024-11-26 23:53:08.708894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.608 ms 00:19:20.738 [2024-11-26 23:53:08.708917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.712202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.712378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.738 [2024-11-26 23:53:08.712443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.838 ms 00:19:20.738 [2024-11-26 23:53:08.712466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.715441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.715614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.738 [2024-11-26 23:53:08.715677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.526 ms 00:19:20.738 [2024-11-26 23:53:08.715701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.716451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.716632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.738 [2024-11-26 23:53:08.716701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:19:20.738 [2024-11-26 23:53:08.716725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.742880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.743103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.738 [2024-11-26 23:53:08.743168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.107 ms 00:19:20.738 [2024-11-26 23:53:08.743193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.751282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:20.738 [2024-11-26 23:53:08.773770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.773968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.738 [2024-11-26 23:53:08.774002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.476 ms 00:19:20.738 [2024-11-26 23:53:08.774018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.774127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.774143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.738 [2024-11-26 23:53:08.774154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:20.738 [2024-11-26 23:53:08.774166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.774241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.774252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.738 [2024-11-26 23:53:08.774261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:20.738 [2024-11-26 23:53:08.774270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.774295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.774305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.738 [2024-11-26 23:53:08.774318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:20.738 [2024-11-26 23:53:08.774327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.774374] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.738 [2024-11-26 23:53:08.774385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.774393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.738 [2024-11-26 23:53:08.774402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:20.738 [2024-11-26 23:53:08.774410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.780967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.781136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.738 [2024-11-26 23:53:08.781166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.534 ms 00:19:20.738 [2024-11-26 23:53:08.781176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 [2024-11-26 23:53:08.781278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.738 [2024-11-26 23:53:08.781290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.738 [2024-11-26 23:53:08.781300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:20.738 [2024-11-26 23:53:08.781309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.738 
[2024-11-26 23:53:08.782673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.738 [2024-11-26 23:53:08.784084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 180.074 ms, result 0 00:19:20.738 [2024-11-26 23:53:08.785383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.738 [2024-11-26 23:53:08.792845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:21.680  [2024-11-26T23:53:11.195Z] Copying: 17/256 [MB] (17 MBps) [2024-11-26T23:53:12.139Z] Copying: 39/256 [MB] (21 MBps) [2024-11-26T23:53:13.082Z] Copying: 61/256 [MB] (22 MBps) [2024-11-26T23:53:14.026Z] Copying: 77/256 [MB] (16 MBps) [2024-11-26T23:53:14.969Z] Copying: 93/256 [MB] (15 MBps) [2024-11-26T23:53:15.907Z] Copying: 106/256 [MB] (13 MBps) [2024-11-26T23:53:16.852Z] Copying: 140/256 [MB] (34 MBps) [2024-11-26T23:53:17.797Z] Copying: 158/256 [MB] (17 MBps) [2024-11-26T23:53:19.184Z] Copying: 173/256 [MB] (15 MBps) [2024-11-26T23:53:20.129Z] Copying: 189/256 [MB] (16 MBps) [2024-11-26T23:53:21.075Z] Copying: 208/256 [MB] (19 MBps) [2024-11-26T23:53:22.119Z] Copying: 223/256 [MB] (14 MBps) [2024-11-26T23:53:23.060Z] Copying: 243/256 [MB] (19 MBps) [2024-11-26T23:53:23.060Z] Copying: 254/256 [MB] (11 MBps) [2024-11-26T23:53:23.060Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-26 23:53:22.911473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.929 [2024-11-26 23:53:22.914008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.914070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.929 [2024-11-26 23:53:22.914086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:34.929 [2024-11-26 23:53:22.914095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.914119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:34.929 [2024-11-26 23:53:22.915095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.915133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.929 [2024-11-26 23:53:22.915146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:19:34.929 [2024-11-26 23:53:22.915156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.918251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.918297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.929 [2024-11-26 23:53:22.918310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:19:34.929 [2024-11-26 23:53:22.918326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.927303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.927350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.929 [2024-11-26 23:53:22.927372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.957 ms 00:19:34.929 [2024-11-26 23:53:22.927381] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.934410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.934467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:34.929 [2024-11-26 23:53:22.934480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.981 ms 00:19:34.929 [2024-11-26 23:53:22.934490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.937334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.937386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.929 [2024-11-26 23:53:22.937397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.767 ms 00:19:34.929 [2024-11-26 23:53:22.937407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.942520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.942580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.929 [2024-11-26 23:53:22.942592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.066 ms 00:19:34.929 [2024-11-26 23:53:22.942601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.942752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.942764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.929 [2024-11-26 23:53:22.942774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:34.929 [2024-11-26 23:53:22.942787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.946198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.946248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:34.929 [2024-11-26 23:53:22.946258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:19:34.929 [2024-11-26 23:53:22.946266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.949349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.949397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:34.929 [2024-11-26 23:53:22.949408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:19:34.929 [2024-11-26 23:53:22.949418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.951815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.951861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.929 [2024-11-26 23:53:22.951871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.351 ms 00:19:34.929 [2024-11-26 23:53:22.951878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.954084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.929 [2024-11-26 23:53:22.954131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:34.929 [2024-11-26 23:53:22.954141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.120 ms 00:19:34.929 [2024-11-26 23:53:22.954150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.929 [2024-11-26 23:53:22.954196] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:34.929 [2024-11-26 23:53:22.954226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954417] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:34.929 [2024-11-26 23:53:22.954553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 
[2024-11-26 23:53:22.954656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:19:34.930 [2024-11-26 23:53:22.954920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.954996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:34.930 [2024-11-26 23:53:22.955199] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:34.930 [2024-11-26 23:53:22.955210] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f06a1c5-8be6-4e88-9b85-7277c6542241 00:19:34.930 [2024-11-26 23:53:22.955220] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:34.930 [2024-11-26 23:53:22.955230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:34.930 [2024-11-26 23:53:22.955239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:34.930 [2024-11-26 23:53:22.955249] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:34.930 [2024-11-26 23:53:22.955258] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:34.930 [2024-11-26 23:53:22.955268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:34.930 [2024-11-26 23:53:22.955277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:34.930 [2024-11-26 23:53:22.955285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:34.930 [2024-11-26 23:53:22.955293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:34.930 [2024-11-26 23:53:22.955301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.930 [2024-11-26 23:53:22.955314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:34.930 [2024-11-26 23:53:22.955328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:19:34.930 [2024-11-26 23:53:22.955335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.930 [2024-11-26 23:53:22.958528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.930 [2024-11-26 23:53:22.958554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:34.930 [2024-11-26 23:53:22.958565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.158 ms 00:19:34.930 [2024-11-26 23:53:22.958574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.930 [2024-11-26 23:53:22.958737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.930 [2024-11-26 23:53:22.958747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:34.930 [2024-11-26 23:53:22.958757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:34.930 [2024-11-26 23:53:22.958765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.930 [2024-11-26 23:53:22.969539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.930 [2024-11-26 23:53:22.969595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.930 [2024-11-26 23:53:22.969607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.930 [2024-11-26 23:53:22.969616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.930 [2024-11-26 23:53:22.969737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.930 [2024-11-26 23:53:22.969747] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.930 [2024-11-26 23:53:22.969757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.930 [2024-11-26 23:53:22.969766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.930 [2024-11-26 23:53:22.969851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.930 [2024-11-26 23:53:22.969863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.930 [2024-11-26 23:53:22.969872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.930 [2024-11-26 23:53:22.969881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:22.969904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:22.969914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.931 [2024-11-26 23:53:22.969922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:22.969930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:22.989786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:22.989882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.931 [2024-11-26 23:53:22.989895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:22.989905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:23.005841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:23.005896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.931 [2024-11-26 23:53:23.005909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:23.005919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:23.006022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:23.006034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.931 [2024-11-26 23:53:23.006044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:23.006053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:23.006092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:23.006104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.931 [2024-11-26 23:53:23.006122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:23.006132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:23.006220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:23.006240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.931 [2024-11-26 23:53:23.006250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:23.006268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:23.006317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:23.006333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:34.931 [2024-11-26 23:53:23.006345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:23.006353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:23.006410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:23.006422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.931 [2024-11-26 23:53:23.006434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:23.006443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:23.006508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.931 [2024-11-26 23:53:23.006522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.931 [2024-11-26 23:53:23.006540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.931 [2024-11-26 23:53:23.006550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.931 [2024-11-26 23:53:23.006750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 92.699 ms, result 0 00:19:35.500 00:19:35.500 00:19:35.500 23:53:23 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87700 00:19:35.500 23:53:23 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87700 00:19:35.500 23:53:23 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:35.500 23:53:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87700 ']' 00:19:35.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.500 23:53:23 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.500 23:53:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.500 23:53:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.500 23:53:23 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.500 23:53:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:35.761 [2024-11-26 23:53:23.677360] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:19:35.762 [2024-11-26 23:53:23.677835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87700 ] 00:19:35.762 [2024-11-26 23:53:23.825732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.762 [2024-11-26 23:53:23.867309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.705 23:53:24 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.705 23:53:24 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:36.705 23:53:24 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:36.705 [2024-11-26 23:53:24.751656] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:36.705 [2024-11-26 23:53:24.751755] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:36.969 [2024-11-26 23:53:24.932981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.933051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:36.969 [2024-11-26 23:53:24.933068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:36.969 [2024-11-26 23:53:24.933080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.935885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.936170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.969 [2024-11-26 23:53:24.936193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.777 ms 00:19:36.969 [2024-11-26 23:53:24.936204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.936331] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:36.969 [2024-11-26 23:53:24.936629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:36.969 [2024-11-26 23:53:24.936648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.936660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.969 [2024-11-26 23:53:24.936674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:19:36.969 [2024-11-26 23:53:24.936685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.939074] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:36.969 [2024-11-26 23:53:24.943952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.944010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:36.969 [2024-11-26 23:53:24.944025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.874 ms 00:19:36.969 [2024-11-26 23:53:24.944037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.944128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.944139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:36.969 [2024-11-26 23:53:24.944155] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:36.969 [2024-11-26 23:53:24.944167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.955849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.955891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.969 [2024-11-26 23:53:24.955907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.621 ms 00:19:36.969 [2024-11-26 23:53:24.955916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.956067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.956079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.969 [2024-11-26 23:53:24.956097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:36.969 [2024-11-26 23:53:24.956105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.956136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.956148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:36.969 [2024-11-26 23:53:24.956159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:36.969 [2024-11-26 23:53:24.956167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.956197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:36.969 [2024-11-26 23:53:24.958950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.958997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.969 [2024-11-26 23:53:24.959012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:19:36.969 [2024-11-26 23:53:24.959028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.959078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.969 [2024-11-26 23:53:24.959089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:36.969 [2024-11-26 23:53:24.959099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:36.969 [2024-11-26 23:53:24.959109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.969 [2024-11-26 23:53:24.959133] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:36.969 [2024-11-26 23:53:24.959165] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:36.969 [2024-11-26 23:53:24.959208] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:36.969 [2024-11-26 23:53:24.959231] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:36.970 [2024-11-26 23:53:24.959343] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:36.970 [2024-11-26 23:53:24.959359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:36.970 [2024-11-26 23:53:24.959372] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:36.970 [2024-11-26 23:53:24.959386] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:36.970 [2024-11-26 23:53:24.959396] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:36.970 [2024-11-26 23:53:24.959412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:36.970 [2024-11-26 23:53:24.959420] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:36.970 [2024-11-26 23:53:24.959433] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:36.970 [2024-11-26 23:53:24.959445] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:36.970 [2024-11-26 23:53:24.959456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.970 [2024-11-26 23:53:24.959465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:36.970 [2024-11-26 23:53:24.959477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:36.970 [2024-11-26 23:53:24.959485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.970 [2024-11-26 23:53:24.959575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.970 [2024-11-26 23:53:24.959585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:36.970 [2024-11-26 23:53:24.959598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:36.970 [2024-11-26 23:53:24.959609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.970 [2024-11-26 23:53:24.959718] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:36.970 [2024-11-26 23:53:24.959731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:36.970 [2024-11-26 23:53:24.959744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.970 [2024-11-26 23:53:24.959759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.970 [2024-11-26 23:53:24.959774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:36.970 [2024-11-26 23:53:24.959782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:36.970 [2024-11-26 23:53:24.959818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:36.970 [2024-11-26 23:53:24.959830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:36.970 [2024-11-26 23:53:24.959841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:36.970 [2024-11-26 23:53:24.959849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.970 [2024-11-26 23:53:24.959860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:36.970 [2024-11-26 23:53:24.959870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:36.970 [2024-11-26 23:53:24.959881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.970 [2024-11-26 23:53:24.959889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:36.970 [2024-11-26 23:53:24.959901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:36.970 [2024-11-26 23:53:24.959909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.970 
[2024-11-26 23:53:24.959922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:36.970 [2024-11-26 23:53:24.959933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:36.970 [2024-11-26 23:53:24.959944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.970 [2024-11-26 23:53:24.959952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:36.970 [2024-11-26 23:53:24.959966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:36.970 [2024-11-26 23:53:24.959974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.970 [2024-11-26 23:53:24.959983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:36.970 [2024-11-26 23:53:24.959993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:36.970 [2024-11-26 23:53:24.960003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.970 [2024-11-26 23:53:24.960010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:36.970 [2024-11-26 23:53:24.960019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:36.970 [2024-11-26 23:53:24.960025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.970 [2024-11-26 23:53:24.960035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:36.970 [2024-11-26 23:53:24.960042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:36.970 [2024-11-26 23:53:24.960050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.970 [2024-11-26 23:53:24.960057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:36.970 [2024-11-26 23:53:24.960069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:36.970 [2024-11-26 23:53:24.960075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.970 [2024-11-26 23:53:24.960084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:36.970 [2024-11-26 23:53:24.960090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:36.970 [2024-11-26 23:53:24.960102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.970 [2024-11-26 23:53:24.960110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:36.970 [2024-11-26 23:53:24.960119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:36.970 [2024-11-26 23:53:24.960126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.970 [2024-11-26 23:53:24.960136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:36.970 [2024-11-26 23:53:24.960142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:36.970 [2024-11-26 23:53:24.960152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.970 [2024-11-26 23:53:24.960170] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:36.970 [2024-11-26 23:53:24.960181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:36.970 [2024-11-26 23:53:24.960189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.970 [2024-11-26 23:53:24.960202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.970 [2024-11-26 23:53:24.960211] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:36.970 [2024-11-26 23:53:24.960221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:36.970 [2024-11-26 23:53:24.960228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:36.970 [2024-11-26 23:53:24.960237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:36.970 [2024-11-26 23:53:24.960244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:36.970 [2024-11-26 23:53:24.960254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:36.970 [2024-11-26 23:53:24.960263] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:36.970 [2024-11-26 23:53:24.960277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.970 [2024-11-26 23:53:24.960291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:36.970 [2024-11-26 23:53:24.960300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:36.970 [2024-11-26 23:53:24.960308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:36.970 [2024-11-26 23:53:24.960317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:36.970 [2024-11-26 23:53:24.960325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:36.970 [2024-11-26 23:53:24.960335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:36.970 [2024-11-26 23:53:24.960343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:36.970 [2024-11-26 23:53:24.960351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:36.970 [2024-11-26 23:53:24.960358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:36.970 [2024-11-26 23:53:24.960368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:36.970 [2024-11-26 23:53:24.960375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:36.970 [2024-11-26 23:53:24.960392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:36.970 [2024-11-26 23:53:24.960400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:36.970 [2024-11-26 23:53:24.960411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:36.970 [2024-11-26 23:53:24.960418] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:36.971 [2024-11-26 
23:53:24.960429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.971 [2024-11-26 23:53:24.960439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:36.971 [2024-11-26 23:53:24.960448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:36.971 [2024-11-26 23:53:24.960456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:36.971 [2024-11-26 23:53:24.960465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:36.971 [2024-11-26 23:53:24.960473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:24.960484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:36.971 [2024-11-26 23:53:24.960492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:19:36.971 [2024-11-26 23:53:24.960502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:24.981135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:24.981188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.971 [2024-11-26 23:53:24.981201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.546 ms 00:19:36.971 [2024-11-26 23:53:24.981211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:24.981357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:24.981381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:36.971 [2024-11-26 23:53:24.981391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:36.971 [2024-11-26 23:53:24.981401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:24.998932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:24.998982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.971 [2024-11-26 23:53:24.999000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.507 ms 00:19:36.971 [2024-11-26 23:53:24.999016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:24.999096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:24.999113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.971 [2024-11-26 23:53:24.999122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:36.971 [2024-11-26 23:53:24.999134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:24.999856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:24.999896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.971 [2024-11-26 23:53:24.999910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:19:36.971 [2024-11-26 23:53:24.999923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:25.000100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:25.000116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.971 [2024-11-26 23:53:25.000125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:19:36.971 [2024-11-26 23:53:25.000143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:25.012077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:25.012130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.971 [2024-11-26 23:53:25.012143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.906 ms 00:19:36.971 [2024-11-26 23:53:25.012155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:25.029505] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:36.971 [2024-11-26 23:53:25.029894] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:36.971 [2024-11-26 23:53:25.029922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:25.029938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:36.971 [2024-11-26 23:53:25.029953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.630 ms 00:19:36.971 [2024-11-26 23:53:25.029966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:25.046840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:25.046917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:36.971 [2024-11-26 23:53:25.046931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.805 ms 00:19:36.971 [2024-11-26 23:53:25.046946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:25.050050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:25.050107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:36.971 [2024-11-26 23:53:25.050118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:19:36.971 [2024-11-26 23:53:25.050128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:25.052744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:25.052823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:36.971 [2024-11-26 23:53:25.052834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:19:36.971 [2024-11-26 23:53:25.052846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:25.053234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:25.053253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:36.971 [2024-11-26 23:53:25.053263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:36.971 [2024-11-26 23:53:25.053275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 
23:53:25.082960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.971 [2024-11-26 23:53:25.083033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:36.971 [2024-11-26 23:53:25.083047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.660 ms 00:19:36.971 [2024-11-26 23:53:25.083063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.971 [2024-11-26 23:53:25.091729] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:37.234 [2024-11-26 23:53:25.116416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.234 [2024-11-26 23:53:25.116466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:37.234 [2024-11-26 23:53:25.116483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.230 ms 00:19:37.234 [2024-11-26 23:53:25.116500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.234 [2024-11-26 23:53:25.116614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.234 [2024-11-26 23:53:25.116635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:37.234 [2024-11-26 23:53:25.116648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:37.234 [2024-11-26 23:53:25.116657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.234 [2024-11-26 23:53:25.116742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.234 [2024-11-26 23:53:25.116753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:37.234 [2024-11-26 23:53:25.116765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:37.234 [2024-11-26 23:53:25.116775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.234 [2024-11-26 23:53:25.116858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.234 [2024-11-26 23:53:25.116869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.234 [2024-11-26 23:53:25.116886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:37.234 [2024-11-26 23:53:25.116897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.234 [2024-11-26 23:53:25.116944] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:37.234 [2024-11-26 23:53:25.116956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.234 [2024-11-26 23:53:25.116966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:37.234 [2024-11-26 23:53:25.116975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:37.234 [2024-11-26 23:53:25.116987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.234 [2024-11-26 23:53:25.123974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.234 [2024-11-26 23:53:25.124036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.234 [2024-11-26 23:53:25.124049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:19:37.234 [2024-11-26 23:53:25.124064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.234 [2024-11-26 23:53:25.124172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.234 [2024-11-26 23:53:25.124187] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.234 [2024-11-26 23:53:25.124198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:37.234 [2024-11-26 23:53:25.124209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.234 [2024-11-26 23:53:25.125577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.234 [2024-11-26 23:53:25.127115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 192.179 ms, result 0 00:19:37.234 [2024-11-26 23:53:25.130015] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.234 Some configs were skipped because the RPC state that can call them passed over. 00:19:37.234 23:53:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:37.496 [2024-11-26 23:53:25.378682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.496 [2024-11-26 23:53:25.378906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:37.496 [2024-11-26 23:53:25.379032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.673 ms 00:19:37.496 [2024-11-26 23:53:25.379072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.496 [2024-11-26 23:53:25.379144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.140 ms, result 0 00:19:37.496 true 00:19:37.496 23:53:25 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:37.496 [2024-11-26 23:53:25.598999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.496 [2024-11-26 23:53:25.599063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:37.496 [2024-11-26 23:53:25.599076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.795 ms 00:19:37.496 [2024-11-26 23:53:25.599087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.496 [2024-11-26 23:53:25.599130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.927 ms, result 0 00:19:37.496 true 00:19:37.496 23:53:25 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87700 00:19:37.496 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87700 ']' 00:19:37.496 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87700 00:19:37.496 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:37.496 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.496 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87700 00:19:37.758 killing process with pid 87700 00:19:37.758 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.758 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.758 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87700' 00:19:37.758 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87700 00:19:37.758 23:53:25 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87700 00:19:37.758 [2024-11-26 23:53:25.846322] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.758 [2024-11-26 23:53:25.846403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:37.758 [2024-11-26 23:53:25.846423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.758 [2024-11-26 23:53:25.846432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.758 [2024-11-26 23:53:25.846465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:37.758 [2024-11-26 23:53:25.847407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.758 [2024-11-26 23:53:25.847445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:37.758 [2024-11-26 23:53:25.847458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:19:37.758 [2024-11-26 23:53:25.847470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.758 [2024-11-26 23:53:25.847780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.758 [2024-11-26 23:53:25.847809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:37.758 [2024-11-26 23:53:25.847820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:37.758 [2024-11-26 23:53:25.847830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.758 [2024-11-26 23:53:25.852497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.758 [2024-11-26 23:53:25.852548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:37.758 [2024-11-26 23:53:25.852561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:19:37.758 [2024-11-26 23:53:25.852574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.758 [2024-11-26 23:53:25.859798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.759 [2024-11-26 23:53:25.859995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:37.759 [2024-11-26 23:53:25.860017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.166 ms 00:19:37.759 [2024-11-26 23:53:25.860035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.759 [2024-11-26 23:53:25.863284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.759 [2024-11-26 23:53:25.863452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:37.759 [2024-11-26 23:53:25.863470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:19:37.759 [2024-11-26 23:53:25.863482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.759 [2024-11-26 23:53:25.869005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.759 [2024-11-26 23:53:25.869071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:37.759 [2024-11-26 23:53:25.869085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.477 ms 00:19:37.759 [2024-11-26 23:53:25.869096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.759 [2024-11-26 23:53:25.869262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.759 [2024-11-26 23:53:25.869276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:37.759 [2024-11-26 23:53:25.869286] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:37.759 [2024-11-26 23:53:25.869297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.759 [2024-11-26 23:53:25.872713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.759 [2024-11-26 23:53:25.872784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:37.759 [2024-11-26 23:53:25.872813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.396 ms 00:19:37.759 [2024-11-26 23:53:25.872826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.759 [2024-11-26 23:53:25.875551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.759 [2024-11-26 23:53:25.875609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:37.759 [2024-11-26 23:53:25.875618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:19:37.759 [2024-11-26 23:53:25.875628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.759 [2024-11-26 23:53:25.878121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.759 [2024-11-26 23:53:25.878321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:37.759 [2024-11-26 23:53:25.878342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:19:37.759 [2024-11-26 23:53:25.878353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.759 [2024-11-26 23:53:25.880681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.759 [2024-11-26 23:53:25.880740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:37.759 [2024-11-26 23:53:25.880750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:19:37.759 [2024-11-26 23:53:25.880759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.759 [2024-11-26 23:53:25.880828] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:37.759 [2024-11-26 23:53:25.880849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880946] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.880994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 
[2024-11-26 23:53:25.881193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:37.759 [2024-11-26 23:53:25.881411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.759 [2024-11-26 23:53:25.881439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.881787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.760 [2024-11-26 23:53:25.882115] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.760 [2024-11-26 23:53:25.882139] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f06a1c5-8be6-4e88-9b85-7277c6542241 00:19:37.760 [2024-11-26 23:53:25.882175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.760 [2024-11-26 23:53:25.882195] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:37.760 [2024-11-26 23:53:25.882216] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.760 [2024-11-26 23:53:25.882236] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.760 [2024-11-26 23:53:25.882257] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.760 [2024-11-26 23:53:25.882281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.760 [2024-11-26 23:53:25.882302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.760 [2024-11-26 23:53:25.882321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.760 [2024-11-26 23:53:25.882341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.760 [2024-11-26 23:53:25.882362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:37.760 [2024-11-26 23:53:25.882384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.760 [2024-11-26 23:53:25.882404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:19:37.760 [2024-11-26 23:53:25.882437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.760 [2024-11-26 23:53:25.885848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.760 [2024-11-26 23:53:25.885996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.760 [2024-11-26 23:53:25.886061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.196 ms 00:19:37.760 [2024-11-26 23:53:25.886089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.760 [2024-11-26 23:53:25.886266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.760 [2024-11-26 23:53:25.886344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.760 [2024-11-26 23:53:25.886372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:19:37.760 [2024-11-26 23:53:25.886395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.897559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.897750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.022 [2024-11-26 23:53:25.897872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.897905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.898062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.898081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.022 [2024-11-26 23:53:25.898092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.898105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.898167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.898180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.022 [2024-11-26 23:53:25.898189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.898199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.898219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.898230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.022 [2024-11-26 23:53:25.898239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.898249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.919267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.919340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.022 [2024-11-26 23:53:25.919354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.919376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 
23:53:25.935315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.935384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.022 [2024-11-26 23:53:25.935397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.935413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.935504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.935518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.022 [2024-11-26 23:53:25.935528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.935541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.935583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.935596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.022 [2024-11-26 23:53:25.935605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.935616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.935712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.935727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.022 [2024-11-26 23:53:25.935737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.935749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.935816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.935831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:38.022 [2024-11-26 23:53:25.935840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.935854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.935911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.935927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.022 [2024-11-26 23:53:25.935937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.935949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.936017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.022 [2024-11-26 23:53:25.936031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.022 [2024-11-26 23:53:25.936041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.022 [2024-11-26 23:53:25.936052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.022 [2024-11-26 23:53:25.936242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.885 ms, result 0 00:19:38.283 23:53:26 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:38.283 23:53:26 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:38.283 [2024-11-26 23:53:26.331836] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:19:38.283 [2024-11-26 23:53:26.331998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87740 ] 00:19:38.544 [2024-11-26 23:53:26.479315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.544 [2024-11-26 23:53:26.520698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.807 [2024-11-26 23:53:26.673565] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.807 [2024-11-26 23:53:26.673666] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.807 [2024-11-26 23:53:26.837879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.837940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.807 [2024-11-26 23:53:26.837957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:38.807 [2024-11-26 23:53:26.837966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.840697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.840930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.807 [2024-11-26 23:53:26.840952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:19:38.807 [2024-11-26 23:53:26.840961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.841079] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.807 [2024-11-26 23:53:26.841371] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.807 [2024-11-26 23:53:26.841391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.841400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.807 [2024-11-26 23:53:26.841410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:19:38.807 [2024-11-26 23:53:26.841419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.843829] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:38.807 [2024-11-26 23:53:26.848603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.848813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:38.807 [2024-11-26 23:53:26.848842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.776 ms 00:19:38.807 [2024-11-26 23:53:26.848851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.848933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.848944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:38.807 [2024-11-26 23:53:26.848954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:19:38.807 [2024-11-26 23:53:26.848962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.860577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.860750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.807 [2024-11-26 23:53:26.860779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.566 ms 00:19:38.807 [2024-11-26 23:53:26.860813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.860970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.860982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.807 [2024-11-26 23:53:26.860993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:38.807 [2024-11-26 23:53:26.861004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.861033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.861043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.807 [2024-11-26 23:53:26.861056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.807 [2024-11-26 23:53:26.861063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.861086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:38.807 [2024-11-26 23:53:26.863815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.863854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.807 [2024-11-26 23:53:26.863866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:19:38.807 [2024-11-26 23:53:26.863880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.863939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.807 [2024-11-26 23:53:26.863956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.807 [2024-11-26 23:53:26.863969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:38.807 [2024-11-26 23:53:26.863977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.807 [2024-11-26 23:53:26.863998] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:38.807 [2024-11-26 23:53:26.864023] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:38.807 [2024-11-26 23:53:26.864072] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:38.807 [2024-11-26 23:53:26.864095] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:38.807 [2024-11-26 23:53:26.864212] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.807 [2024-11-26 23:53:26.864223] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.807 [2024-11-26 23:53:26.864235] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.807 [2024-11-26 23:53:26.864245] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864254] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864263] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:38.808 [2024-11-26 23:53:26.864272] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.808 [2024-11-26 23:53:26.864279] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.808 [2024-11-26 23:53:26.864289] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.808 [2024-11-26 23:53:26.864300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.808 [2024-11-26 23:53:26.864308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.808 [2024-11-26 23:53:26.864317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:38.808 [2024-11-26 23:53:26.864324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.808 [2024-11-26 23:53:26.864416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.808 [2024-11-26 23:53:26.864431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.808 [2024-11-26 23:53:26.864439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:38.808 [2024-11-26 23:53:26.864451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.808 [2024-11-26 23:53:26.864553] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.808 [2024-11-26 23:53:26.864568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.808 [2024-11-26 23:53:26.864577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.808 [2024-11-26 23:53:26.864607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.808 [2024-11-26 23:53:26.864635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.808 [2024-11-26 23:53:26.864652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.808 [2024-11-26 23:53:26.864660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:38.808 [2024-11-26 23:53:26.864667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.808 [2024-11-26 23:53:26.864676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.808 [2024-11-26 23:53:26.864684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:38.808 [2024-11-26 23:53:26.864691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864699] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.808 [2024-11-26 23:53:26.864708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.808 [2024-11-26 23:53:26.864736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.808 [2024-11-26 23:53:26.864765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.808 [2024-11-26 23:53:26.864814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.808 [2024-11-26 23:53:26.864837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.808 [2024-11-26 23:53:26.864859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.808 [2024-11-26 23:53:26.864872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.808 [2024-11-26 23:53:26.864880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:38.808 [2024-11-26 23:53:26.864886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.808 [2024-11-26 23:53:26.864893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.808 [2024-11-26 23:53:26.864900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:38.808 [2024-11-26 23:53:26.864910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.808 [2024-11-26 23:53:26.864925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:38.808 [2024-11-26 23:53:26.864932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864939] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.808 [2024-11-26 23:53:26.864948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.808 [2024-11-26 23:53:26.864956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.808 [2024-11-26 23:53:26.864968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.808 [2024-11-26 23:53:26.864976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.808 
[2024-11-26 23:53:26.864984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.808 [2024-11-26 23:53:26.864990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.808 [2024-11-26 23:53:26.864998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.808 [2024-11-26 23:53:26.865009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.808 [2024-11-26 23:53:26.865016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.808 [2024-11-26 23:53:26.865025] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.808 [2024-11-26 23:53:26.865036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.808 [2024-11-26 23:53:26.865048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:38.808 [2024-11-26 23:53:26.865056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:38.808 [2024-11-26 23:53:26.865064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:38.808 [2024-11-26 23:53:26.865071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:38.808 [2024-11-26 23:53:26.865079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:38.808 [2024-11-26 23:53:26.865086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:38.808 [2024-11-26 23:53:26.865094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:38.808 [2024-11-26 23:53:26.865109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:38.808 [2024-11-26 23:53:26.865116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:38.808 [2024-11-26 23:53:26.865124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:38.808 [2024-11-26 23:53:26.865132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:38.808 [2024-11-26 23:53:26.865139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:38.808 [2024-11-26 23:53:26.865146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:38.808 [2024-11-26 23:53:26.865153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:38.808 [2024-11-26 23:53:26.865161] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.808 [2024-11-26 23:53:26.865186] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.808 [2024-11-26 23:53:26.865198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.808 [2024-11-26 23:53:26.865205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.808 [2024-11-26 23:53:26.865213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.808 [2024-11-26 23:53:26.865222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.808 [2024-11-26 23:53:26.865230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.808 [2024-11-26 23:53:26.865239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.808 [2024-11-26 23:53:26.865251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:19:38.808 [2024-11-26 23:53:26.865259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.808 [2024-11-26 23:53:26.885819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.808 [2024-11-26 23:53:26.885867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.808 [2024-11-26 23:53:26.885880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.484 ms 00:19:38.808 [2024-11-26 23:53:26.885889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.808 [2024-11-26 23:53:26.886039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.808 [2024-11-26 23:53:26.886055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.808 [2024-11-26 23:53:26.886065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:38.809 [2024-11-26 23:53:26.886073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.809 [2024-11-26 23:53:26.913046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.809 [2024-11-26 23:53:26.913118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.809 [2024-11-26 23:53:26.913150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.947 ms 00:19:38.809 [2024-11-26 23:53:26.913165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.809 [2024-11-26 23:53:26.913309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.809 [2024-11-26 23:53:26.913329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.809 [2024-11-26 23:53:26.913344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.809 [2024-11-26 23:53:26.913356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.809 [2024-11-26 23:53:26.914188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.809 [2024-11-26 23:53:26.914230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.809 [2024-11-26 23:53:26.914249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:19:38.809 [2024-11-26 23:53:26.914264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.809 [2024-11-26 
23:53:26.914513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.809 [2024-11-26 23:53:26.914545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.809 [2024-11-26 23:53:26.914558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:19:38.809 [2024-11-26 23:53:26.914571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.809 [2024-11-26 23:53:26.926745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.809 [2024-11-26 23:53:26.926813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.809 [2024-11-26 23:53:26.926833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.141 ms 00:19:38.809 [2024-11-26 23:53:26.926843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.809 [2024-11-26 23:53:26.931877] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:38.809 [2024-11-26 23:53:26.931927] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.809 [2024-11-26 23:53:26.931941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.809 [2024-11-26 23:53:26.931950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.809 [2024-11-26 23:53:26.931960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.953 ms 00:19:38.809 [2024-11-26 23:53:26.931968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.070 [2024-11-26 23:53:26.948391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.070 [2024-11-26 23:53:26.948442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:39.070 [2024-11-26 23:53:26.948455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.342 ms 00:19:39.070 [2024-11-26 23:53:26.948464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.070 [2024-11-26 23:53:26.951457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.070 [2024-11-26 23:53:26.951505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:39.070 [2024-11-26 23:53:26.951516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:19:39.070 [2024-11-26 23:53:26.951524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.070 [2024-11-26 23:53:26.954132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.070 [2024-11-26 23:53:26.954178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:39.070 [2024-11-26 23:53:26.954189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:19:39.070 [2024-11-26 23:53:26.954196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.070 [2024-11-26 23:53:26.954561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.070 [2024-11-26 23:53:26.954574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:39.070 [2024-11-26 23:53:26.954584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:19:39.070 [2024-11-26 23:53:26.954592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.070 [2024-11-26 23:53:26.983809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:39.070 [2024-11-26 23:53:26.983888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:39.070 [2024-11-26 23:53:26.983903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.192 ms 00:19:39.070 [2024-11-26 23:53:26.983912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.070 [2024-11-26 23:53:26.992591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:39.070 [2024-11-26 23:53:27.017429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.070 [2024-11-26 23:53:27.017501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:39.070 [2024-11-26 23:53:27.017515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.412 ms 00:19:39.070 [2024-11-26 23:53:27.017526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.070 [2024-11-26 23:53:27.017637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.070 [2024-11-26 23:53:27.017654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:39.070 [2024-11-26 23:53:27.017665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:39.070 [2024-11-26 23:53:27.017674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.070 [2024-11-26 23:53:27.017760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.070 [2024-11-26 23:53:27.017771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:39.071 [2024-11-26 23:53:27.017781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:39.071 [2024-11-26 23:53:27.017831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.071 [2024-11-26 23:53:27.017868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.071 [2024-11-26 23:53:27.017878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:39.071 [2024-11-26 23:53:27.017892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:39.071 [2024-11-26 23:53:27.017901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.071 [2024-11-26 23:53:27.017946] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:39.071 [2024-11-26 23:53:27.017957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.071 [2024-11-26 23:53:27.017966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:39.071 [2024-11-26 23:53:27.017975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:39.071 [2024-11-26 23:53:27.017983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.071 [2024-11-26 23:53:27.025053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.071 [2024-11-26 23:53:27.025106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:39.071 [2024-11-26 23:53:27.025120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.042 ms 00:19:39.071 [2024-11-26 23:53:27.025137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.071 [2024-11-26 23:53:27.025244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.071 [2024-11-26 23:53:27.025256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:39.071 [2024-11-26 23:53:27.025273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:39.071 [2024-11-26 23:53:27.025282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.071 [2024-11-26 23:53:27.027262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:39.071 [2024-11-26 23:53:27.028963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 189.003 ms, result 0 00:19:39.071 [2024-11-26 23:53:27.030309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:39.071 [2024-11-26 23:53:27.037632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.012  [2024-11-26T23:53:29.089Z] Copying: 16/256 [MB] (16 MBps) [2024-11-26T23:53:30.478Z] Copying: 27/256 [MB] (10 MBps) [2024-11-26T23:53:31.051Z] Copying: 44/256 [MB] (17 MBps) [2024-11-26T23:53:32.442Z] Copying: 57/256 [MB] (13 MBps) [2024-11-26T23:53:33.389Z] Copying: 67/256 [MB] (10 MBps) [2024-11-26T23:53:34.332Z] Copying: 78/256 [MB] (10 MBps) [2024-11-26T23:53:35.278Z] Copying: 90/256 [MB] (12 MBps) [2024-11-26T23:53:36.224Z] Copying: 100/256 [MB] (10 MBps) [2024-11-26T23:53:37.171Z] Copying: 111/256 [MB] (10 MBps) [2024-11-26T23:53:38.118Z] Copying: 123/256 [MB] (11 MBps) [2024-11-26T23:53:39.063Z] Copying: 138/256 [MB] (15 MBps) [2024-11-26T23:53:40.449Z] Copying: 153/256 [MB] (14 MBps) [2024-11-26T23:53:41.393Z] Copying: 169/256 [MB] (15 MBps) [2024-11-26T23:53:42.340Z] Copying: 190/256 [MB] (21 MBps) [2024-11-26T23:53:43.287Z] Copying: 212/256 [MB] (21 MBps) [2024-11-26T23:53:44.231Z] Copying: 232/256 [MB] (19 MBps) [2024-11-26T23:53:44.495Z] Copying: 252/256 [MB] (19 MBps) [2024-11-26T23:53:44.495Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-26 23:53:44.380713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.364 [2024-11-26 23:53:44.383331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.383392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:56.364 [2024-11-26 23:53:44.383409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:56.364 [2024-11-26 23:53:44.383419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.383442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:56.364 [2024-11-26 23:53:44.384388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.384436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:56.364 [2024-11-26 23:53:44.384447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:19:56.364 [2024-11-26 23:53:44.384457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.384747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.384770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:56.364 [2024-11-26 23:53:44.384780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:19:56.364 [2024-11-26 23:53:44.384809] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.388577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.388604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:56.364 [2024-11-26 23:53:44.388616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.745 ms 00:19:56.364 [2024-11-26 23:53:44.388624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.395609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.395652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:56.364 [2024-11-26 23:53:44.395672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.964 ms 00:19:56.364 [2024-11-26 23:53:44.395680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.398776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.398838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:56.364 [2024-11-26 23:53:44.398849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:19:56.364 [2024-11-26 23:53:44.398857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.404175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.404225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:56.364 [2024-11-26 23:53:44.404235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.270 ms 00:19:56.364 [2024-11-26 23:53:44.404244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.404382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.404393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:56.364 [2024-11-26 23:53:44.404415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:56.364 [2024-11-26 23:53:44.404423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.407638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.407686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:56.364 [2024-11-26 23:53:44.407697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:19:56.364 [2024-11-26 23:53:44.407704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.410771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.410834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:56.364 [2024-11-26 23:53:44.410845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:19:56.364 [2024-11-26 23:53:44.410854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.413151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.413199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:56.364 [2024-11-26 23:53:44.413208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.251 ms 00:19:56.364 
[2024-11-26 23:53:44.413215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.415247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.364 [2024-11-26 23:53:44.415293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:56.364 [2024-11-26 23:53:44.415303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:19:56.364 [2024-11-26 23:53:44.415311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.364 [2024-11-26 23:53:44.415352] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:56.364 [2024-11-26 23:53:44.415372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415525] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:56.364 [2024-11-26 23:53:44.415694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 
[2024-11-26 23:53:44.415733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:19:56.365 [2024-11-26 23:53:44.415950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.415996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:56.365 [2024-11-26 23:53:44.416188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:56.365 [2024-11-26 23:53:44.416197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f06a1c5-8be6-4e88-9b85-7277c6542241 00:19:56.365 [2024-11-26 23:53:44.416205] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:56.365 [2024-11-26 23:53:44.416212] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:56.365 [2024-11-26 23:53:44.416220] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:56.365 [2024-11-26 23:53:44.416229] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:56.365 [2024-11-26 23:53:44.416239] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:56.365 [2024-11-26 23:53:44.416247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:56.365 [2024-11-26 23:53:44.416261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:56.365 [2024-11-26 23:53:44.416268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:56.365 [2024-11-26 23:53:44.416274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:56.365 [2024-11-26 23:53:44.416282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.365 [2024-11-26 23:53:44.416290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:56.365 [2024-11-26 23:53:44.416298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:19:56.365 [2024-11-26 23:53:44.416307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.365 [2024-11-26 23:53:44.419404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.365 [2024-11-26 23:53:44.419429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:56.365 [2024-11-26 23:53:44.419444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.061 ms 00:19:56.365 [2024-11-26 23:53:44.419452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.365 [2024-11-26 23:53:44.419606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.365 [2024-11-26 23:53:44.419615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:56.365 [2024-11-26 23:53:44.419625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:56.365 [2024-11-26 23:53:44.419633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.365 [2024-11-26 23:53:44.430000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.365 [2024-11-26 23:53:44.430192] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.365 [2024-11-26 23:53:44.430219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.365 [2024-11-26 23:53:44.430228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.365 [2024-11-26 23:53:44.430328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.365 [2024-11-26 23:53:44.430339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.365 [2024-11-26 23:53:44.430347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.365 [2024-11-26 23:53:44.430356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.365 [2024-11-26 23:53:44.430411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.365 [2024-11-26 23:53:44.430421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.365 [2024-11-26 23:53:44.430430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.365 [2024-11-26 23:53:44.430440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.365 [2024-11-26 23:53:44.430464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.365 [2024-11-26 23:53:44.430474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.365 [2024-11-26 23:53:44.430483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.430491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.450328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.366 [2024-11-26 23:53:44.450386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.366 [2024-11-26 23:53:44.450406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.450416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.465993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.366 [2024-11-26 23:53:44.466051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.366 [2024-11-26 23:53:44.466077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.466087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.466153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.366 [2024-11-26 23:53:44.466164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.366 [2024-11-26 23:53:44.466173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.466182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.466226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.366 [2024-11-26 23:53:44.466235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.366 [2024-11-26 23:53:44.466245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.466254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.466345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:56.366 [2024-11-26 23:53:44.466356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.366 [2024-11-26 23:53:44.466378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.466387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.466424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.366 [2024-11-26 23:53:44.466434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:56.366 [2024-11-26 23:53:44.466443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.466451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.466510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.366 [2024-11-26 23:53:44.466527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.366 [2024-11-26 23:53:44.466536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.466544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.466609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.366 [2024-11-26 23:53:44.466620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.366 [2024-11-26 23:53:44.466630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.366 [2024-11-26 23:53:44.466639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.366 [2024-11-26 23:53:44.466885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.463 ms, result 0 00:19:56.627 00:19:56.627 00:19:56.627 23:53:44 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:56.627 23:53:44 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:57.200 23:53:45 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.462 [2024-11-26 23:53:45.393919] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:19:57.462 [2024-11-26 23:53:45.394081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87940 ] 00:19:57.462 [2024-11-26 23:53:45.541122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.462 [2024-11-26 23:53:45.581413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.724 [2024-11-26 23:53:45.731452] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.724 [2024-11-26 23:53:45.731552] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.989 [2024-11-26 23:53:45.894185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.894408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:57.989 [2024-11-26 23:53:45.894435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:57.989 [2024-11-26 23:53:45.894445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.897161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.897214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.989 [2024-11-26 23:53:45.897226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:19:57.989 [2024-11-26 23:53:45.897237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.897353] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:57.989 [2024-11-26 23:53:45.897630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:57.989 [2024-11-26 23:53:45.897650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.897659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.989 [2024-11-26 23:53:45.897669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:19:57.989 [2024-11-26 23:53:45.897677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.900123] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:57.989 [2024-11-26 23:53:45.905006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.905200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:57.989 [2024-11-26 23:53:45.905374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.877 ms 00:19:57.989 [2024-11-26 23:53:45.905388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.905707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.905759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:57.989 [2024-11-26 23:53:45.905773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:57.989 [2024-11-26 23:53:45.905783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.917110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:57.989 [2024-11-26 23:53:45.917167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.989 [2024-11-26 23:53:45.917179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.240 ms 00:19:57.989 [2024-11-26 23:53:45.917188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.917349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.917366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.989 [2024-11-26 23:53:45.917376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:57.989 [2024-11-26 23:53:45.917388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.917417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.917426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:57.989 [2024-11-26 23:53:45.917435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:57.989 [2024-11-26 23:53:45.917442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.917465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:57.989 [2024-11-26 23:53:45.920165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.920342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.989 [2024-11-26 23:53:45.920359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:19:57.989 [2024-11-26 23:53:45.920373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.920429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.920439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:57.989 [2024-11-26 23:53:45.920448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:57.989 [2024-11-26 23:53:45.920462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.920482] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:57.989 [2024-11-26 23:53:45.920506] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:57.989 [2024-11-26 23:53:45.920551] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:57.989 [2024-11-26 23:53:45.920581] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:57.989 [2024-11-26 23:53:45.920692] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:57.989 [2024-11-26 23:53:45.920704] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:57.989 [2024-11-26 23:53:45.920716] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:57.989 [2024-11-26 23:53:45.920726] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:57.989 [2024-11-26 23:53:45.920735] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:57.989 [2024-11-26 23:53:45.920744] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:57.989 [2024-11-26 23:53:45.920759] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:57.989 [2024-11-26 23:53:45.920767] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:57.989 [2024-11-26 23:53:45.920777] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:57.989 [2024-11-26 23:53:45.920788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.920818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:57.989 [2024-11-26 23:53:45.920827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:19:57.989 [2024-11-26 23:53:45.920839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.920930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.989 [2024-11-26 23:53:45.920948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:57.989 [2024-11-26 23:53:45.920957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:57.989 [2024-11-26 23:53:45.920966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.989 [2024-11-26 23:53:45.921069] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:57.989 [2024-11-26 23:53:45.921084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:57.989 [2024-11-26 23:53:45.921094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.989 [2024-11-26 23:53:45.921104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.989 [2024-11-26 23:53:45.921112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:57.989 [2024-11-26 23:53:45.921120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:57.989 [2024-11-26 23:53:45.921129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:57.989 [2024-11-26 23:53:45.921139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:57.989 [2024-11-26 23:53:45.921148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.990 [2024-11-26 23:53:45.921169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:57.990 [2024-11-26 23:53:45.921177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:57.990 [2024-11-26 23:53:45.921186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.990 [2024-11-26 23:53:45.921194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:57.990 [2024-11-26 23:53:45.921202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:57.990 [2024-11-26 23:53:45.921209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:57.990 [2024-11-26 23:53:45.921225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:57.990 [2024-11-26 23:53:45.921233] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:57.990 [2024-11-26 23:53:45.921249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.990 [2024-11-26 23:53:45.921265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:57.990 [2024-11-26 23:53:45.921278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.990 [2024-11-26 23:53:45.921293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:57.990 [2024-11-26 23:53:45.921301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.990 [2024-11-26 23:53:45.921316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:57.990 [2024-11-26 23:53:45.921323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.990 [2024-11-26 23:53:45.921338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:57.990 [2024-11-26 23:53:45.921345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.990 [2024-11-26 23:53:45.921359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:57.990 [2024-11-26 23:53:45.921365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:57.990 [2024-11-26 23:53:45.921371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.990 [2024-11-26 23:53:45.921377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:57.990 [2024-11-26 23:53:45.921384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:57.990 [2024-11-26 23:53:45.921392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:57.990 [2024-11-26 23:53:45.921409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:57.990 [2024-11-26 23:53:45.921417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921423] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:57.990 [2024-11-26 23:53:45.921435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:57.990 [2024-11-26 23:53:45.921446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.990 [2024-11-26 23:53:45.921454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.990 [2024-11-26 23:53:45.921462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:57.990 [2024-11-26 23:53:45.921468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:57.990 [2024-11-26 23:53:45.921475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:57.990 
[2024-11-26 23:53:45.921482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:57.990 [2024-11-26 23:53:45.921489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:57.990 [2024-11-26 23:53:45.921495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:57.990 [2024-11-26 23:53:45.921504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:57.990 [2024-11-26 23:53:45.921514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.990 [2024-11-26 23:53:45.921524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:57.990 [2024-11-26 23:53:45.921532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:57.990 [2024-11-26 23:53:45.921540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:57.990 [2024-11-26 23:53:45.921546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:57.990 [2024-11-26 23:53:45.921553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:57.990 [2024-11-26 23:53:45.921560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:57.990 [2024-11-26 23:53:45.921568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:57.990 [2024-11-26 23:53:45.921582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:57.990 [2024-11-26 23:53:45.921589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:57.990 [2024-11-26 23:53:45.921596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:57.990 [2024-11-26 23:53:45.921605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:57.990 [2024-11-26 23:53:45.921613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:57.990 [2024-11-26 23:53:45.921620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:57.990 [2024-11-26 23:53:45.921628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:57.990 [2024-11-26 23:53:45.921636] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:57.990 [2024-11-26 23:53:45.921648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.990 [2024-11-26 23:53:45.921659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:57.990 [2024-11-26 23:53:45.921666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:57.990 [2024-11-26 23:53:45.921674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:57.990 [2024-11-26 23:53:45.921682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:57.990 [2024-11-26 23:53:45.921690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.990 [2024-11-26 23:53:45.921701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:57.990 [2024-11-26 23:53:45.921710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:19:57.990 [2024-11-26 23:53:45.921718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.990 [2024-11-26 23:53:45.941636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.990 [2024-11-26 23:53:45.941682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.990 [2024-11-26 23:53:45.941694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.780 ms 00:19:57.990 [2024-11-26 23:53:45.941703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.990 [2024-11-26 23:53:45.941895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.990 [2024-11-26 23:53:45.941917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.990 [2024-11-26 23:53:45.941927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:57.990 [2024-11-26 23:53:45.941936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.990 [2024-11-26 23:53:45.967435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.990 [2024-11-26 23:53:45.967496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.990 [2024-11-26 23:53:45.967511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.472 ms 00:19:57.990 [2024-11-26 23:53:45.967522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.990 [2024-11-26 23:53:45.967639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.990 [2024-11-26 23:53:45.967653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.990 [2024-11-26 23:53:45.967665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:57.990 [2024-11-26 23:53:45.967675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.990 [2024-11-26 23:53:45.968397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.990 [2024-11-26 23:53:45.968428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.990 [2024-11-26 23:53:45.968451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:19:57.990 [2024-11-26 23:53:45.968463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.990 [2024-11-26 23:53:45.968655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.990 [2024-11-26 23:53:45.968674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.990 [2024-11-26 23:53:45.968684] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:19:57.990 [2024-11-26 23:53:45.968694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.990 [2024-11-26 23:53:45.980488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.990 [2024-11-26 23:53:45.980533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.990 [2024-11-26 23:53:45.980551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.764 ms 00:19:57.990 [2024-11-26 23:53:45.980559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.990 [2024-11-26 23:53:45.985334] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:57.990 [2024-11-26 23:53:45.985385] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:57.990 [2024-11-26 23:53:45.985399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:45.985408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:57.991 [2024-11-26 23:53:45.985417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.703 ms 00:19:57.991 [2024-11-26 23:53:45.985426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.001481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.001669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:57.991 [2024-11-26 23:53:46.001700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.974 ms 00:19:57.991 [2024-11-26 23:53:46.001709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.004757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.004925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:57.991 [2024-11-26 23:53:46.004944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.920 ms 00:19:57.991 [2024-11-26 23:53:46.004953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.007730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.007780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:57.991 [2024-11-26 23:53:46.007811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.718 ms 00:19:57.991 [2024-11-26 23:53:46.007821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.008200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.008219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.991 [2024-11-26 23:53:46.008231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:57.991 [2024-11-26 23:53:46.008239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.036396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.036467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:57.991 [2024-11-26 23:53:46.036483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.130 ms 00:19:57.991 [2024-11-26 23:53:46.036492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.045249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.991 [2024-11-26 23:53:46.070305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.070373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.991 [2024-11-26 23:53:46.070389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.699 ms 00:19:57.991 [2024-11-26 23:53:46.070398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.070511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.070524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:57.991 [2024-11-26 23:53:46.070539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:57.991 [2024-11-26 23:53:46.070549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.070623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.070633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.991 [2024-11-26 23:53:46.070643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:57.991 [2024-11-26 23:53:46.070651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.070683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.070694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.991 [2024-11-26 23:53:46.070703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:57.991 [2024-11-26 23:53:46.070715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.070763] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:57.991 [2024-11-26 23:53:46.070779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.070815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:57.991 [2024-11-26 23:53:46.070825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:57.991 [2024-11-26 23:53:46.070834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.077533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.077586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.991 [2024-11-26 23:53:46.077608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.671 ms 00:19:57.991 [2024-11-26 23:53:46.077618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 [2024-11-26 23:53:46.077723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.991 [2024-11-26 23:53:46.077736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.991 [2024-11-26 23:53:46.077765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:57.991 [2024-11-26 23:53:46.077775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.991 
[2024-11-26 23:53:46.079320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.991 [2024-11-26 23:53:46.080757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 184.773 ms, result 0 00:19:57.991 [2024-11-26 23:53:46.082090] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.991 [2024-11-26 23:53:46.089527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.566  [2024-11-26T23:53:46.697Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-11-26 23:53:46.393591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.566 [2024-11-26 23:53:46.394774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.394842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:58.566 [2024-11-26 23:53:46.394856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:58.566 [2024-11-26 23:53:46.394864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.394894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:58.566 [2024-11-26 23:53:46.395823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.395851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:58.566 [2024-11-26 23:53:46.395863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:19:58.566 [2024-11-26 23:53:46.395872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.397971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.398023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:58.566 [2024-11-26 23:53:46.398041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:19:58.566 [2024-11-26 23:53:46.398049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.402445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.402480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:58.566 [2024-11-26 23:53:46.402491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.372 ms 00:19:58.566 [2024-11-26 23:53:46.402500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.409477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.409516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:58.566 [2024-11-26 23:53:46.409537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.943 ms 00:19:58.566 [2024-11-26 23:53:46.409548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.412810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.412856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:58.566 [2024-11-26 23:53:46.412866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.211 ms 00:19:58.566 [2024-11-26 23:53:46.412873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.417844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.417891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:58.566 [2024-11-26 23:53:46.417901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.926 ms 00:19:58.566 [2024-11-26 23:53:46.417909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.418027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.418036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:58.566 [2024-11-26 23:53:46.418053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:58.566 [2024-11-26 23:53:46.418067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.421310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.421356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:58.566 [2024-11-26 23:53:46.421366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:19:58.566 [2024-11-26 23:53:46.421373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.424246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.424423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:58.566 [2024-11-26 23:53:46.424442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:19:58.566 [2024-11-26 23:53:46.424449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.426543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.426589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:58.566 [2024-11-26 23:53:46.426599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:19:58.566 [2024-11-26 23:53:46.426606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.428705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.566 [2024-11-26 23:53:46.428752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:58.566 [2024-11-26 23:53:46.428762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:19:58.566 [2024-11-26 23:53:46.428770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.566 [2024-11-26 23:53:46.428824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:58.566 [2024-11-26 23:53:46.428842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 
23:53:46.428876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:58.566 [2024-11-26 23:53:46.428967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.428974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.428982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.428990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.428997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:58.567 [2024-11-26 23:53:46.429078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:58.567 [2024-11-26 23:53:46.429650] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:58.567 [2024-11-26 23:53:46.429659] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f06a1c5-8be6-4e88-9b85-7277c6542241 00:19:58.567 [2024-11-26 23:53:46.429668] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:58.567 [2024-11-26 23:53:46.429676] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:58.567 
[2024-11-26 23:53:46.429684] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:58.567 [2024-11-26 23:53:46.429700] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:58.567 [2024-11-26 23:53:46.429709] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:58.567 [2024-11-26 23:53:46.429720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:58.568 [2024-11-26 23:53:46.429729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:58.568 [2024-11-26 23:53:46.429736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:58.568 [2024-11-26 23:53:46.429757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:58.568 [2024-11-26 23:53:46.429764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 23:53:46.429773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:58.568 [2024-11-26 23:53:46.429786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:19:58.568 [2024-11-26 23:53:46.429806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.432420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 23:53:46.432450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:58.568 [2024-11-26 23:53:46.432460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:19:58.568 [2024-11-26 23:53:46.432473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.432637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-11-26 23:53:46.432646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:58.568 [2024-11-26 23:53:46.432660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:19:58.568 [2024-11-26 23:53:46.432668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.443033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.443084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.568 [2024-11-26 23:53:46.443102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.443111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.443196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.443206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.568 [2024-11-26 23:53:46.443215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.443224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.443281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.443291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.568 [2024-11-26 23:53:46.443299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.443307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.443329] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.443339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.568 [2024-11-26 23:53:46.443348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.443356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.462836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.462891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.568 [2024-11-26 23:53:46.462904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.462926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.478219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.478454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.568 [2024-11-26 23:53:46.478475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.478485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.478564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.478578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.568 [2024-11-26 23:53:46.478588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.478606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.478656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.478667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.568 [2024-11-26 23:53:46.478676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.478686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.478782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.478818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.568 [2024-11-26 23:53:46.478836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.478846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.478892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.478907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:58.568 [2024-11-26 23:53:46.478917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.478926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.478982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.478993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.568 [2024-11-26 23:53:46.479002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.479011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:58.568 [2024-11-26 23:53:46.479079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.568 [2024-11-26 23:53:46.479095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.568 [2024-11-26 23:53:46.479107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.568 [2024-11-26 23:53:46.479117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.568 [2024-11-26 23:53:46.479310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.486 ms, result 0 00:19:58.830 00:19:58.830 00:19:58.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.830 23:53:46 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87958 00:19:58.830 23:53:46 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:58.830 23:53:46 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87958 00:19:58.830 23:53:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87958 ']' 00:19:58.830 23:53:46 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.830 23:53:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.830 23:53:46 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.830 23:53:46 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.830 23:53:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:58.830 [2024-11-26 23:53:46.835366] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:19:58.830 [2024-11-26 23:53:46.835540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87958 ] 00:19:59.091 [2024-11-26 23:53:46.985734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.091 [2024-11-26 23:53:47.026143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.665 23:53:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.665 23:53:47 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:59.665 23:53:47 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:59.927 [2024-11-26 23:53:47.901134] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:59.927 [2024-11-26 23:53:47.901225] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.221 [2024-11-26 23:53:48.079982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.080045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.221 [2024-11-26 23:53:48.080063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:00.221 [2024-11-26 23:53:48.080075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.082858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.083052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.221 [2024-11-26 23:53:48.083076] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:20:00.221 [2024-11-26 23:53:48.083087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.083313] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.221 [2024-11-26 23:53:48.083629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.221 [2024-11-26 23:53:48.083648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.083661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.221 [2024-11-26 23:53:48.083674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:20:00.221 [2024-11-26 23:53:48.083685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.086172] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.221 [2024-11-26 23:53:48.091037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.091095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.221 [2024-11-26 23:53:48.091121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.861 ms 00:20:00.221 [2024-11-26 23:53:48.091139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.091240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.091252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.221 [2024-11-26 23:53:48.091267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:00.221 [2024-11-26 23:53:48.091278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.102605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.102639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.221 [2024-11-26 23:53:48.102653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.266 ms 00:20:00.221 [2024-11-26 23:53:48.102662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.102843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.102856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.221 [2024-11-26 23:53:48.102877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:00.221 [2024-11-26 23:53:48.102885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.102929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.102938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.221 [2024-11-26 23:53:48.102949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:00.221 [2024-11-26 23:53:48.102960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.102992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:00.221 [2024-11-26 23:53:48.105628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:00.221 [2024-11-26 23:53:48.105854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.221 [2024-11-26 23:53:48.105873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:20:00.221 [2024-11-26 23:53:48.105885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.105942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.221 [2024-11-26 23:53:48.105955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.221 [2024-11-26 23:53:48.105966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:00.221 [2024-11-26 23:53:48.105977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.221 [2024-11-26 23:53:48.106000] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.221 [2024-11-26 23:53:48.106028] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.221 [2024-11-26 23:53:48.106072] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.221 [2024-11-26 23:53:48.106092] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:00.222 [2024-11-26 23:53:48.106206] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.222 [2024-11-26 23:53:48.106220] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.222 [2024-11-26 23:53:48.106232] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.222 [2024-11-26 23:53:48.106244] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106254] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106270] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:00.222 [2024-11-26 23:53:48.106280] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.222 [2024-11-26 23:53:48.106291] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.222 [2024-11-26 23:53:48.106302] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.222 [2024-11-26 23:53:48.106312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.222 [2024-11-26 23:53:48.106320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.222 [2024-11-26 23:53:48.106330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:00.222 [2024-11-26 23:53:48.106338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.222 [2024-11-26 23:53:48.106428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.222 [2024-11-26 23:53:48.106439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.222 [2024-11-26 23:53:48.106451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:00.222 [2024-11-26 23:53:48.106459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.222 [2024-11-26 23:53:48.106568] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.222 [2024-11-26 23:53:48.106579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.222 [2024-11-26 23:53:48.106591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.222 [2024-11-26 23:53:48.106621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.222 [2024-11-26 23:53:48.106649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.222 [2024-11-26 23:53:48.106669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.222 [2024-11-26 23:53:48.106677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:00.222 [2024-11-26 23:53:48.106687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.222 [2024-11-26 23:53:48.106697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.222 [2024-11-26 23:53:48.106708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:00.222 [2024-11-26 23:53:48.106715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.222 [2024-11-26 23:53:48.106734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.222 [2024-11-26 23:53:48.106765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.222 [2024-11-26 23:53:48.106811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.222 [2024-11-26 23:53:48.106840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.222 [2024-11-26 23:53:48.106864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.222 [2024-11-26 23:53:48.106880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.222 [2024-11-26 
23:53:48.106889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.222 [2024-11-26 23:53:48.106905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.222 [2024-11-26 23:53:48.106912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:00.222 [2024-11-26 23:53:48.106925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.222 [2024-11-26 23:53:48.106932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.222 [2024-11-26 23:53:48.106941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:00.222 [2024-11-26 23:53:48.106947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.222 [2024-11-26 23:53:48.106963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:00.222 [2024-11-26 23:53:48.106972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.222 [2024-11-26 23:53:48.106979] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.222 [2024-11-26 23:53:48.106989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.222 [2024-11-26 23:53:48.107001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.222 [2024-11-26 23:53:48.107013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.222 [2024-11-26 23:53:48.107021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.222 [2024-11-26 23:53:48.107031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.222 [2024-11-26 23:53:48.107038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.222 [2024-11-26 23:53:48.107049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.222 [2024-11-26 23:53:48.107056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.222 [2024-11-26 23:53:48.107067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.222 [2024-11-26 23:53:48.107077] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.222 [2024-11-26 23:53:48.107103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.222 [2024-11-26 23:53:48.107112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:00.222 [2024-11-26 23:53:48.107122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:00.222 [2024-11-26 23:53:48.107129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:00.222 [2024-11-26 23:53:48.107139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:00.222 [2024-11-26 23:53:48.107147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:00.222 
[2024-11-26 23:53:48.107157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:00.222 [2024-11-26 23:53:48.107165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:00.222 [2024-11-26 23:53:48.107174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:00.222 [2024-11-26 23:53:48.107181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:00.222 [2024-11-26 23:53:48.107202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:00.222 [2024-11-26 23:53:48.107210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:00.222 [2024-11-26 23:53:48.107226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:00.222 [2024-11-26 23:53:48.107233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:00.222 [2024-11-26 23:53:48.107246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:00.222 [2024-11-26 23:53:48.107254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.222 [2024-11-26 23:53:48.107265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.222 [2024-11-26 23:53:48.107274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.222 [2024-11-26 23:53:48.107284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.222 [2024-11-26 23:53:48.107291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.222 [2024-11-26 23:53:48.107302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.222 [2024-11-26 23:53:48.107310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.222 [2024-11-26 23:53:48.107324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.222 [2024-11-26 23:53:48.107338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:20:00.222 [2024-11-26 23:53:48.107351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.222 [2024-11-26 23:53:48.127319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.127371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.223 [2024-11-26 23:53:48.127385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.882 ms 00:20:00.223 [2024-11-26 23:53:48.127399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.127545] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.127562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.223 [2024-11-26 23:53:48.127572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:00.223 [2024-11-26 23:53:48.127582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.144673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.144728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.223 [2024-11-26 23:53:48.144743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.060 ms 00:20:00.223 [2024-11-26 23:53:48.144754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.144849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.144864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.223 [2024-11-26 23:53:48.144874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:00.223 [2024-11-26 23:53:48.144885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.145569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.145614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.223 [2024-11-26 23:53:48.145626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:20:00.223 [2024-11-26 23:53:48.145642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.145844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.145861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.223 [2024-11-26 23:53:48.145870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:20:00.223 [2024-11-26 23:53:48.145887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.157406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.157458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.223 [2024-11-26 23:53:48.157469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.493 ms 00:20:00.223 [2024-11-26 23:53:48.157480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.171243] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:00.223 [2024-11-26 23:53:48.171463] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:00.223 [2024-11-26 23:53:48.171487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.171502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:00.223 [2024-11-26 23:53:48.171514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.872 ms 00:20:00.223 [2024-11-26 23:53:48.171526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.187911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 
23:53:48.187966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:00.223 [2024-11-26 23:53:48.187979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.283 ms 00:20:00.223 [2024-11-26 23:53:48.187997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.190999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.191050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:00.223 [2024-11-26 23:53:48.191061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.907 ms 00:20:00.223 [2024-11-26 23:53:48.191071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.193774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.193844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:00.223 [2024-11-26 23:53:48.193855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:20:00.223 [2024-11-26 23:53:48.193866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.194228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.194244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:00.223 [2024-11-26 23:53:48.194253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:00.223 [2024-11-26 23:53:48.194264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.222648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.222722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.223 [2024-11-26 23:53:48.222736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.360 ms 00:20:00.223 [2024-11-26 23:53:48.222754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.231195] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:00.223 [2024-11-26 23:53:48.255146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.255204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.223 [2024-11-26 23:53:48.255220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.275 ms 00:20:00.223 [2024-11-26 23:53:48.255229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.255335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.255345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.223 [2024-11-26 23:53:48.255359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:00.223 [2024-11-26 23:53:48.255367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.255446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.255456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.223 [2024-11-26 23:53:48.255468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:00.223 [2024-11-26 
23:53:48.255477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.255508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.255520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.223 [2024-11-26 23:53:48.255538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:00.223 [2024-11-26 23:53:48.255547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.255590] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.223 [2024-11-26 23:53:48.255601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.255611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.223 [2024-11-26 23:53:48.255620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:00.223 [2024-11-26 23:53:48.255631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.262539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.262599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.223 [2024-11-26 23:53:48.262615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.884 ms 00:20:00.223 [2024-11-26 23:53:48.262626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.262726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.223 [2024-11-26 23:53:48.262740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.223 [2024-11-26 23:53:48.262750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:00.223 [2024-11-26 23:53:48.262761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.223 [2024-11-26 23:53:48.264056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.223 [2024-11-26 23:53:48.265482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 183.655 ms, result 0 00:20:00.223 [2024-11-26 23:53:48.268324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.223 Some configs were skipped because the RPC state that can call them passed over. 
00:20:00.223 23:53:48 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:00.489 [2024-11-26 23:53:48.509514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.489 [2024-11-26 23:53:48.509713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:00.489 [2024-11-26 23:53:48.509821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:20:00.489 [2024-11-26 23:53:48.509851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.489 [2024-11-26 23:53:48.509921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.435 ms, result 0 00:20:00.489 true 00:20:00.489 23:53:48 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:00.751 [2024-11-26 23:53:48.729394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.751 [2024-11-26 23:53:48.729584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:00.751 [2024-11-26 23:53:48.729647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:20:00.751 [2024-11-26 23:53:48.729674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.751 [2024-11-26 23:53:48.729735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.021 ms, result 0 00:20:00.751 true 00:20:00.751 23:53:48 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87958 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87958 ']' 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87958 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87958 00:20:00.751 killing process with pid 87958 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87958' 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87958 00:20:00.751 23:53:48 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87958 00:20:01.013 [2024-11-26 23:53:48.983296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.013 [2024-11-26 23:53:48.983367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:01.013 [2024-11-26 23:53:48.983386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:01.013 [2024-11-26 23:53:48.983398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.013 [2024-11-26 23:53:48.983431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:01.013 [2024-11-26 23:53:48.984376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.013 [2024-11-26 23:53:48.984420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:01.013 [2024-11-26 23:53:48.984432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.927 ms 00:20:01.013 [2024-11-26 23:53:48.984445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.013 [2024-11-26 23:53:48.984762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.013 [2024-11-26 23:53:48.984776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:01.013 [2024-11-26 23:53:48.984785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:01.013 [2024-11-26 23:53:48.984816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.013 [2024-11-26 23:53:48.989500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.013 [2024-11-26 23:53:48.989551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:01.013 [2024-11-26 23:53:48.989569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.663 ms 00:20:01.013 [2024-11-26 23:53:48.989580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.013 [2024-11-26 23:53:48.996747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.013 [2024-11-26 23:53:48.996817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:01.013 [2024-11-26 23:53:48.996830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.122 ms 00:20:01.013 [2024-11-26 23:53:48.996844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.013 [2024-11-26 23:53:48.999909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.013 [2024-11-26 23:53:49.000090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:01.013 [2024-11-26 23:53:49.000110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:20:01.013 [2024-11-26 23:53:49.000121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.013 [2024-11-26 23:53:49.005532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.013 [2024-11-26 23:53:49.005584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:01.013 [2024-11-26 23:53:49.005598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.362 ms 00:20:01.014 [2024-11-26 23:53:49.005610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.014 [2024-11-26 23:53:49.005822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.014 [2024-11-26 23:53:49.005838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:01.014 [2024-11-26 23:53:49.005848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:20:01.014 [2024-11-26 23:53:49.005858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.014 [2024-11-26 23:53:49.008596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.014 [2024-11-26 23:53:49.008651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:01.014 [2024-11-26 23:53:49.008662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:20:01.014 [2024-11-26 23:53:49.008678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.014 [2024-11-26 23:53:49.011487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.014 [2024-11-26 23:53:49.011653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:01.014 [2024-11-26 
23:53:49.011671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:20:01.014 [2024-11-26 23:53:49.011681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.014 [2024-11-26 23:53:49.014079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.014 [2024-11-26 23:53:49.014132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:01.014 [2024-11-26 23:53:49.014142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.354 ms 00:20:01.014 [2024-11-26 23:53:49.014151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.014 [2024-11-26 23:53:49.015891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.014 [2024-11-26 23:53:49.015943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:01.014 [2024-11-26 23:53:49.015952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:20:01.014 [2024-11-26 23:53:49.015962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.014 [2024-11-26 23:53:49.016006] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:01.014 [2024-11-26 23:53:49.016027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016188] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 
23:53:49.016418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:01.014 [2024-11-26 23:53:49.016643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:01.014 [2024-11-26 23:53:49.016687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:01.015 [2024-11-26 23:53:49.016993] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:01.015 [2024-11-26 23:53:49.017004] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f06a1c5-8be6-4e88-9b85-7277c6542241 00:20:01.015 [2024-11-26 23:53:49.017017] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:01.015 [2024-11-26 23:53:49.017025] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:01.015 [2024-11-26 23:53:49.017035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:01.015 [2024-11-26 23:53:49.017043] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:01.015 [2024-11-26 23:53:49.017058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:01.015 [2024-11-26 23:53:49.017067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:01.015 [2024-11-26 23:53:49.017077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:01.015 [2024-11-26 23:53:49.017084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:01.015 [2024-11-26 23:53:49.017092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:01.015 [2024-11-26 23:53:49.017100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.015 [2024-11-26 23:53:49.017117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:01.015 [2024-11-26 23:53:49.017126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:20:01.015 [2024-11-26 23:53:49.017138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.020188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.015 [2024-11-26 23:53:49.020219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:01.015 [2024-11-26 23:53:49.020230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.026 ms 00:20:01.015 [2024-11-26 23:53:49.020242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.020431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:01.015 [2024-11-26 23:53:49.020444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:01.015 [2024-11-26 23:53:49.020454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:20:01.015 [2024-11-26 23:53:49.020468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.031233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.031290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.015 [2024-11-26 23:53:49.031302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.031313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.031410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.031423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.015 [2024-11-26 23:53:49.031433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.031450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.031500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.031514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.015 [2024-11-26 23:53:49.031522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.031533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.031555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.031566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.015 [2024-11-26 23:53:49.031575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.031586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.052336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.052399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.015 [2024-11-26 23:53:49.052413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.052434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.068174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.068238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.015 [2024-11-26 23:53:49.068251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.068271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.068355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.068370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.015 [2024-11-26 23:53:49.068379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.068391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:01.015 [2024-11-26 23:53:49.068431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.068443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.015 [2024-11-26 23:53:49.068453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.068464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.068558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.068572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.015 [2024-11-26 23:53:49.068582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.068598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.068645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.068659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:01.015 [2024-11-26 23:53:49.068670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.068682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.068737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.068749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.015 [2024-11-26 23:53:49.068757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.068767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.068861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.015 [2024-11-26 23:53:49.068877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.015 [2024-11-26 23:53:49.068888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.015 [2024-11-26 23:53:49.068899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.015 [2024-11-26 23:53:49.069092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.762 ms, result 0 00:20:01.275 23:53:49 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:01.535 [2024-11-26 23:53:49.459869] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:20:01.535 [2024-11-26 23:53:49.460295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88001 ] 00:20:01.535 [2024-11-26 23:53:49.608673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.535 [2024-11-26 23:53:49.647711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.796 [2024-11-26 23:53:49.799751] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.796 [2024-11-26 23:53:49.799872] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.058 [2024-11-26 23:53:49.963870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.964243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:02.058 [2024-11-26 23:53:49.964275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:02.058 [2024-11-26 23:53:49.964288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.967528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.967857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.058 [2024-11-26 23:53:49.967883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:20:02.058 [2024-11-26 23:53:49.967892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.968116] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:02.058 [2024-11-26 23:53:49.968436] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:02.058 [2024-11-26 23:53:49.968460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.968471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.058 [2024-11-26 23:53:49.968482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:20:02.058 [2024-11-26 23:53:49.968495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.970841] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:02.058 [2024-11-26 23:53:49.975669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.975718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:02.058 [2024-11-26 23:53:49.975736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:20:02.058 [2024-11-26 23:53:49.975745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.975865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.975877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:02.058 [2024-11-26 23:53:49.975888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:02.058 [2024-11-26 23:53:49.975897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.987106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:02.058 [2024-11-26 23:53:49.987150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.058 [2024-11-26 23:53:49.987171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.161 ms 00:20:02.058 [2024-11-26 23:53:49.987180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.987333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.987346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.058 [2024-11-26 23:53:49.987357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:02.058 [2024-11-26 23:53:49.987369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.987400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.987410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:02.058 [2024-11-26 23:53:49.987420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:02.058 [2024-11-26 23:53:49.987428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.987458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:02.058 [2024-11-26 23:53:49.990127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.990300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.058 [2024-11-26 23:53:49.990328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.676 ms 00:20:02.058 [2024-11-26 23:53:49.990342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.990416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.990427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:02.058 [2024-11-26 23:53:49.990437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:02.058 [2024-11-26 23:53:49.990446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.990476] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:02.058 [2024-11-26 23:53:49.990500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:02.058 [2024-11-26 23:53:49.990547] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:02.058 [2024-11-26 23:53:49.990571] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:02.058 [2024-11-26 23:53:49.990685] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:02.058 [2024-11-26 23:53:49.990697] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:02.058 [2024-11-26 23:53:49.990708] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:02.058 [2024-11-26 23:53:49.990723] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:02.058 [2024-11-26 23:53:49.990734] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:02.058 [2024-11-26 23:53:49.990743] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:02.058 [2024-11-26 23:53:49.990752] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:02.058 [2024-11-26 23:53:49.990760] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:02.058 [2024-11-26 23:53:49.990775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:02.058 [2024-11-26 23:53:49.990808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.990817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:02.058 [2024-11-26 23:53:49.990826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:02.058 [2024-11-26 23:53:49.990839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.990935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.058 [2024-11-26 23:53:49.990944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:02.058 [2024-11-26 23:53:49.990952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:02.058 [2024-11-26 23:53:49.990960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.058 [2024-11-26 23:53:49.991061] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:02.058 [2024-11-26 23:53:49.991077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:02.059 [2024-11-26 23:53:49.991090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:02.059 [2024-11-26 23:53:49.991116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:02.059 [2024-11-26 23:53:49.991143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.059 [2024-11-26 23:53:49.991158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:02.059 [2024-11-26 23:53:49.991169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:02.059 [2024-11-26 23:53:49.991176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.059 [2024-11-26 23:53:49.991184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:02.059 [2024-11-26 23:53:49.991191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:02.059 [2024-11-26 23:53:49.991199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:02.059 [2024-11-26 23:53:49.991216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991222] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:02.059 [2024-11-26 23:53:49.991236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:02.059 [2024-11-26 23:53:49.991262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:02.059 [2024-11-26 23:53:49.991282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:02.059 [2024-11-26 23:53:49.991301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:02.059 [2024-11-26 23:53:49.991321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.059 [2024-11-26 23:53:49.991335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:02.059 [2024-11-26 23:53:49.991342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:02.059 [2024-11-26 23:53:49.991348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.059 [2024-11-26 23:53:49.991355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:02.059 [2024-11-26 23:53:49.991362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:02.059 [2024-11-26 23:53:49.991372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:02.059 [2024-11-26 23:53:49.991385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:02.059 [2024-11-26 23:53:49.991392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991405] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:02.059 [2024-11-26 23:53:49.991413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:02.059 [2024-11-26 23:53:49.991422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.059 [2024-11-26 23:53:49.991438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:02.059 [2024-11-26 23:53:49.991445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:02.059 [2024-11-26 23:53:49.991452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:02.059 
[2024-11-26 23:53:49.991458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:02.059 [2024-11-26 23:53:49.991465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:02.059 [2024-11-26 23:53:49.991472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:02.059 [2024-11-26 23:53:49.991481] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:02.059 [2024-11-26 23:53:49.991491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.059 [2024-11-26 23:53:49.991502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:02.059 [2024-11-26 23:53:49.991510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:02.059 [2024-11-26 23:53:49.991517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:02.059 [2024-11-26 23:53:49.991524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:02.059 [2024-11-26 23:53:49.991531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:02.059 [2024-11-26 23:53:49.991539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:02.059 [2024-11-26 23:53:49.991546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:02.059 [2024-11-26 23:53:49.991559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:02.059 [2024-11-26 23:53:49.991568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:02.059 [2024-11-26 23:53:49.991575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:02.059 [2024-11-26 23:53:49.991583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:02.059 [2024-11-26 23:53:49.991592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:02.059 [2024-11-26 23:53:49.991600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:02.059 [2024-11-26 23:53:49.991608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:02.059 [2024-11-26 23:53:49.991616] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:02.059 [2024-11-26 23:53:49.991627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.059 [2024-11-26 23:53:49.991645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:02.059 [2024-11-26 23:53:49.991653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:02.059 [2024-11-26 23:53:49.991660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:02.059 [2024-11-26 23:53:49.991668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:02.059 [2024-11-26 23:53:49.991676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.059 [2024-11-26 23:53:49.991685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:02.059 [2024-11-26 23:53:49.991693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:20:02.059 [2024-11-26 23:53:49.991701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.059 [2024-11-26 23:53:50.011612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.059 [2024-11-26 23:53:50.011669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.059 [2024-11-26 23:53:50.011682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.839 ms 00:20:02.059 [2024-11-26 23:53:50.011692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.059 [2024-11-26 23:53:50.011858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.059 [2024-11-26 23:53:50.011875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:02.059 [2024-11-26 23:53:50.011885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:02.059 [2024-11-26 23:53:50.011894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.059 [2024-11-26 23:53:50.037686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.059 [2024-11-26 23:53:50.037776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.059 [2024-11-26 23:53:50.037814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.767 ms 00:20:02.059 [2024-11-26 23:53:50.037836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.059 [2024-11-26 23:53:50.037950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.059 [2024-11-26 23:53:50.037966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.059 [2024-11-26 23:53:50.037977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:02.059 [2024-11-26 23:53:50.037993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.059 [2024-11-26 23:53:50.038668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.059 [2024-11-26 23:53:50.038698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.059 [2024-11-26 23:53:50.038711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:20:02.059 [2024-11-26 23:53:50.038722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.059 [2024-11-26 23:53:50.038914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.059 [2024-11-26 23:53:50.038929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.059 [2024-11-26 23:53:50.038939] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:20:02.060 [2024-11-26 23:53:50.038947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.050487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.050534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.060 [2024-11-26 23:53:50.050551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.515 ms 00:20:02.060 [2024-11-26 23:53:50.050561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.055198] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:02.060 [2024-11-26 23:53:50.055248] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:02.060 [2024-11-26 23:53:50.055262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.055271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:02.060 [2024-11-26 23:53:50.055281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.564 ms 00:20:02.060 [2024-11-26 23:53:50.055289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.071304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.071350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:02.060 [2024-11-26 23:53:50.071363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.889 ms 00:20:02.060 [2024-11-26 23:53:50.071371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.074246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.074293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:02.060 [2024-11-26 23:53:50.074303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.780 ms 00:20:02.060 [2024-11-26 23:53:50.074311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.076862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.076903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:02.060 [2024-11-26 23:53:50.076914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.486 ms 00:20:02.060 [2024-11-26 23:53:50.076921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.077297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.077310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:02.060 [2024-11-26 23:53:50.077319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:02.060 [2024-11-26 23:53:50.077327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.105998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.106065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:02.060 [2024-11-26 23:53:50.106082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.639 ms 00:20:02.060 [2024-11-26 23:53:50.106092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.114718] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:02.060 [2024-11-26 23:53:50.139888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.139952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:02.060 [2024-11-26 23:53:50.139967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.680 ms 00:20:02.060 [2024-11-26 23:53:50.139977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.140096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.140109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:02.060 [2024-11-26 23:53:50.140124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:02.060 [2024-11-26 23:53:50.140132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.140209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.140220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:02.060 [2024-11-26 23:53:50.140235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:02.060 [2024-11-26 23:53:50.140243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.140276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.140287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:02.060 [2024-11-26 23:53:50.140296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.060 [2024-11-26 23:53:50.140308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.140354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:02.060 [2024-11-26 23:53:50.140366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.140375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:02.060 [2024-11-26 23:53:50.140384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:02.060 [2024-11-26 23:53:50.140392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.147192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.147231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:02.060 [2024-11-26 23:53:50.147244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.774 ms 00:20:02.060 [2024-11-26 23:53:50.147253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 [2024-11-26 23:53:50.147367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.060 [2024-11-26 23:53:50.147380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:02.060 [2024-11-26 23:53:50.147391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:02.060 [2024-11-26 23:53:50.147399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.060 
[2024-11-26 23:53:50.148666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.060 [2024-11-26 23:53:50.150082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 184.432 ms, result 0 00:20:02.060 [2024-11-26 23:53:50.152161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.060 [2024-11-26 23:53:50.159407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:03.450  [2024-11-26T23:53:52.526Z] Copying: 14/256 [MB] (14 MBps) [2024-11-26T23:53:53.473Z] Copying: 24/256 [MB] (10 MBps) [2024-11-26T23:53:54.420Z] Copying: 35/256 [MB] (10 MBps) [2024-11-26T23:53:55.367Z] Copying: 45/256 [MB] (10 MBps) [2024-11-26T23:53:56.313Z] Copying: 55/256 [MB] (10 MBps) [2024-11-26T23:53:57.257Z] Copying: 70/256 [MB] (14 MBps) [2024-11-26T23:53:58.632Z] Copying: 86/256 [MB] (15 MBps) [2024-11-26T23:53:59.577Z] Copying: 98/256 [MB] (11 MBps) [2024-11-26T23:54:00.519Z] Copying: 108/256 [MB] (10 MBps) [2024-11-26T23:54:01.460Z] Copying: 121/256 [MB] (12 MBps) [2024-11-26T23:54:02.404Z] Copying: 137/256 [MB] (15 MBps) [2024-11-26T23:54:03.350Z] Copying: 154/256 [MB] (17 MBps) [2024-11-26T23:54:04.295Z] Copying: 170/256 [MB] (16 MBps) [2024-11-26T23:54:05.237Z] Copying: 191/256 [MB] (20 MBps) [2024-11-26T23:54:06.623Z] Copying: 213/256 [MB] (22 MBps) [2024-11-26T23:54:07.570Z] Copying: 231/256 [MB] (17 MBps) [2024-11-26T23:54:07.832Z] Copying: 251/256 [MB] (19 MBps) [2024-11-26T23:54:08.095Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-26 23:54:07.988326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.964 [2024-11-26 23:54:07.990996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:07.991054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:19.964 [2024-11-26 23:54:07.991072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.964 [2024-11-26 23:54:07.991083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:07.991112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:19.964 [2024-11-26 23:54:07.992102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:07.992146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:19.964 [2024-11-26 23:54:07.992160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:20:19.964 [2024-11-26 23:54:07.992171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:07.992501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:07.992522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:19.964 [2024-11-26 23:54:07.992537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:19.964 [2024-11-26 23:54:07.992546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:07.996284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:07.996312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:20:19.964 [2024-11-26 23:54:07.996325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:20:19.964 [2024-11-26 23:54:07.996334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:08.003689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:08.003735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:19.964 [2024-11-26 23:54:08.003749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.332 ms 00:20:19.964 [2024-11-26 23:54:08.003765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:08.007349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:08.007414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:19.964 [2024-11-26 23:54:08.007428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.477 ms 00:20:19.964 [2024-11-26 23:54:08.007437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:08.013228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:08.013280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:19.964 [2024-11-26 23:54:08.013294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.739 ms 00:20:19.964 [2024-11-26 23:54:08.013303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:08.013449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:08.013462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:19.964 [2024-11-26 23:54:08.013481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:19.964 [2024-11-26 23:54:08.013490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:08.018295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.964 [2024-11-26 23:54:08.018357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:19.964 [2024-11-26 23:54:08.018372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:20:19.964 [2024-11-26 23:54:08.018381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.964 [2024-11-26 23:54:08.020545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.965 [2024-11-26 23:54:08.020594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:19.965 [2024-11-26 23:54:08.020604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:20:19.965 [2024-11-26 23:54:08.020613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.965 [2024-11-26 23:54:08.022039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.965 [2024-11-26 23:54:08.022083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:19.965 [2024-11-26 23:54:08.022095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:20:19.965 [2024-11-26 23:54:08.022104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.965 [2024-11-26 23:54:08.023838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.965 [2024-11-26 23:54:08.023883] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:19.965 [2024-11-26 23:54:08.023895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:20:19.965 [2024-11-26 23:54:08.023903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.965 [2024-11-26 23:54:08.024071] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:19.965 [2024-11-26 23:54:08.024104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 
[2024-11-26 23:54:08.024308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:19.965 [2024-11-26 23:54:08.024534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:19.965 [2024-11-26 23:54:08.024631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.024982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:19.966 [2024-11-26 23:54:08.025000] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:19.966 [2024-11-26 23:54:08.025009] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f06a1c5-8be6-4e88-9b85-7277c6542241 00:20:19.966 [2024-11-26 23:54:08.025018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:19.966 [2024-11-26 23:54:08.025027] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:19.966 [2024-11-26 23:54:08.025035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:19.966 [2024-11-26 23:54:08.025044] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:19.966 [2024-11-26 23:54:08.025052] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:19.966 [2024-11-26 23:54:08.025065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:19.966 [2024-11-26 23:54:08.025073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:19.966 [2024-11-26 23:54:08.025079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:19.966 [2024-11-26 23:54:08.025086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:19.966 [2024-11-26 23:54:08.025095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.966 [2024-11-26 23:54:08.025103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:19.966 [2024-11-26 23:54:08.025112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:20:19.966 [2024-11-26 23:54:08.025120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.966 [2024-11-26 23:54:08.028970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.966 [2024-11-26 23:54:08.029006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:19.966 [2024-11-26 23:54:08.029025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.830 ms 00:20:19.966 [2024-11-26 23:54:08.029038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.966 [2024-11-26 23:54:08.029193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.966 [2024-11-26 23:54:08.029211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:19.966 [2024-11-26 23:54:08.029222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:20:19.966 [2024-11-26 23:54:08.029230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.966 [2024-11-26 23:54:08.039714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.966 [2024-11-26 23:54:08.039763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.966 [2024-11-26 23:54:08.039785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.966 [2024-11-26 23:54:08.039834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
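The ftl_dev_dump_stats block above ends with WAF: inf. That is expected for this run: the counters show 960 total writes against 0 user writes, and a write-amplification factor computed as total writes over user writes (which is what these counters suggest) has no defined value when the denominator is zero, so it is printed as inf. A minimal post-processing sketch for recomputing the ratio from a saved copy of this console output; build.log is a hypothetical capture, not a file the test produces:

    # Pull the two counters out of a saved log and recompute the reported WAF.
    grep -Eo '(total|user) writes: [0-9]+' build.log |
        awk -F': ' '/total writes/ {t=$2} /user writes/ {u=$2}
                    END { if (u > 0) print t / u; else print "inf" }'
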
00:20:19.966 [2024-11-26 23:54:08.039929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.966 [2024-11-26 23:54:08.039941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.966 [2024-11-26 23:54:08.039950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.966 [2024-11-26 23:54:08.039958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.966 [2024-11-26 23:54:08.040010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.966 [2024-11-26 23:54:08.040021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.966 [2024-11-26 23:54:08.040030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.966 [2024-11-26 23:54:08.040038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.040061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.040070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.967 [2024-11-26 23:54:08.040080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 23:54:08.040092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.059873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.059930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.967 [2024-11-26 23:54:08.059943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 23:54:08.059960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.075213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.075271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.967 [2024-11-26 23:54:08.075283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 23:54:08.075293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.075358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.075369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.967 [2024-11-26 23:54:08.075378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 23:54:08.075388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.075437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.075451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.967 [2024-11-26 23:54:08.075460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 23:54:08.075469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.075557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.075568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.967 [2024-11-26 23:54:08.075588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 
23:54:08.075597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.075633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.075645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:19.967 [2024-11-26 23:54:08.075655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 23:54:08.075663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.075718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.075733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.967 [2024-11-26 23:54:08.075745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 23:54:08.075754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.075841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.967 [2024-11-26 23:54:08.075857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.967 [2024-11-26 23:54:08.075868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.967 [2024-11-26 23:54:08.075882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.967 [2024-11-26 23:54:08.076080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.046 ms, result 0 00:20:20.228 00:20:20.228 00:20:20.228 23:54:08 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:20.801 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:20.801 23:54:08 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:20.801 23:54:08 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:20.801 23:54:08 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:21.062 23:54:08 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:21.062 23:54:08 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:21.062 23:54:08 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:21.062 23:54:09 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87958 00:20:21.062 23:54:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87958 ']' 00:20:21.062 23:54:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87958 00:20:21.062 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87958) - No such process 00:20:21.062 Process with pid 87958 is not found 00:20:21.062 23:54:09 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 87958 is not found' 00:20:21.062 ************************************ 00:20:21.062 END TEST ftl_trim 00:20:21.062 ************************************ 00:20:21.062 00:20:21.062 real 1m14.565s 00:20:21.062 user 1m25.999s 00:20:21.062 sys 0m17.474s 00:20:21.062 23:54:09 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.062 23:54:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:21.062 23:54:09 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:21.062 23:54:09 ftl -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:21.062 23:54:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.062 23:54:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:21.062 ************************************ 00:20:21.062 START TEST ftl_restore 00:20:21.062 ************************************ 00:20:21.062 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:21.062 * Looking for test storage... 00:20:21.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:21.062 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:21.062 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:21.062 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.323 23:54:09 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:21.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.323 --rc genhtml_branch_coverage=1 00:20:21.323 --rc genhtml_function_coverage=1 00:20:21.323 --rc genhtml_legend=1 00:20:21.323 --rc geninfo_all_blocks=1 00:20:21.323 --rc geninfo_unexecuted_blocks=1 00:20:21.323 00:20:21.323 ' 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:21.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.323 --rc genhtml_branch_coverage=1 00:20:21.323 --rc genhtml_function_coverage=1 00:20:21.323 --rc genhtml_legend=1 00:20:21.323 --rc geninfo_all_blocks=1 00:20:21.323 --rc geninfo_unexecuted_blocks=1 00:20:21.323 00:20:21.323 ' 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:21.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.323 --rc genhtml_branch_coverage=1 00:20:21.323 --rc genhtml_function_coverage=1 00:20:21.323 --rc genhtml_legend=1 00:20:21.323 --rc geninfo_all_blocks=1 00:20:21.323 --rc geninfo_unexecuted_blocks=1 00:20:21.323 00:20:21.323 ' 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:21.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.323 --rc genhtml_branch_coverage=1 00:20:21.323 --rc genhtml_function_coverage=1 00:20:21.323 --rc genhtml_legend=1 00:20:21.323 --rc geninfo_all_blocks=1 00:20:21.323 --rc geninfo_unexecuted_blocks=1 00:20:21.323 00:20:21.323 ' 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
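The xtrace above is restore.sh sourcing test/ftl/common.sh: the lcov probe decides which coverage options to export, and the dirname/readlink pair resolves the test and repository directories used by the rest of the run. Condensed into plain shell, the setup in this stretch of the trace, together with the option parsing that follows it, amounts to roughly the sketch below; it is reassembled from the xtrace lines rather than quoted from the scripts, so the real files may phrase it differently:

    # restore.sh was invoked as: restore.sh -c 0000:00:10.0 0000:00:11.0
    testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py

    mount_dir=$(mktemp -d)                     # e.g. /tmp/tmp.Wj1TlLN41W
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;             # 0000:00:10.0, the NV cache device
        esac
    done
    shift 2                                    # as in the trace; exposes the base device
    device=$1                                  # 0000:00:11.0
    timeout=240
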
00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Wj1TlLN41W 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:21.323 
23:54:09 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88270 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88270 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88270 ']' 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.323 23:54:09 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:21.323 23:54:09 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:21.323 [2024-11-26 23:54:09.369388] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:20:21.323 [2024-11-26 23:54:09.370422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88270 ] 00:20:21.584 [2024-11-26 23:54:09.520852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.584 [2024-11-26 23:54:09.561687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.525 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.525 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:22.525 23:54:10 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:22.525 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:22.525 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:22.525 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:22.525 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:22.525 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:22.787 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:22.787 { 00:20:22.787 "name": "nvme0n1", 00:20:22.787 "aliases": [ 00:20:22.787 "912ff6df-c6d3-4d41-9dff-0200f6d0b78e" 00:20:22.787 ], 00:20:22.787 "product_name": "NVMe disk", 00:20:22.787 "block_size": 4096, 00:20:22.787 "num_blocks": 1310720, 00:20:22.787 "uuid": 
"912ff6df-c6d3-4d41-9dff-0200f6d0b78e", 00:20:22.787 "numa_id": -1, 00:20:22.787 "assigned_rate_limits": { 00:20:22.787 "rw_ios_per_sec": 0, 00:20:22.787 "rw_mbytes_per_sec": 0, 00:20:22.787 "r_mbytes_per_sec": 0, 00:20:22.787 "w_mbytes_per_sec": 0 00:20:22.787 }, 00:20:22.787 "claimed": true, 00:20:22.787 "claim_type": "read_many_write_one", 00:20:22.787 "zoned": false, 00:20:22.787 "supported_io_types": { 00:20:22.787 "read": true, 00:20:22.787 "write": true, 00:20:22.787 "unmap": true, 00:20:22.787 "flush": true, 00:20:22.787 "reset": true, 00:20:22.787 "nvme_admin": true, 00:20:22.787 "nvme_io": true, 00:20:22.787 "nvme_io_md": false, 00:20:22.787 "write_zeroes": true, 00:20:22.787 "zcopy": false, 00:20:22.787 "get_zone_info": false, 00:20:22.787 "zone_management": false, 00:20:22.787 "zone_append": false, 00:20:22.787 "compare": true, 00:20:22.787 "compare_and_write": false, 00:20:22.787 "abort": true, 00:20:22.787 "seek_hole": false, 00:20:22.787 "seek_data": false, 00:20:22.787 "copy": true, 00:20:22.787 "nvme_iov_md": false 00:20:22.787 }, 00:20:22.787 "driver_specific": { 00:20:22.787 "nvme": [ 00:20:22.787 { 00:20:22.787 "pci_address": "0000:00:11.0", 00:20:22.787 "trid": { 00:20:22.787 "trtype": "PCIe", 00:20:22.787 "traddr": "0000:00:11.0" 00:20:22.787 }, 00:20:22.787 "ctrlr_data": { 00:20:22.787 "cntlid": 0, 00:20:22.787 "vendor_id": "0x1b36", 00:20:22.787 "model_number": "QEMU NVMe Ctrl", 00:20:22.787 "serial_number": "12341", 00:20:22.787 "firmware_revision": "8.0.0", 00:20:22.787 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:22.787 "oacs": { 00:20:22.787 "security": 0, 00:20:22.787 "format": 1, 00:20:22.787 "firmware": 0, 00:20:22.787 "ns_manage": 1 00:20:22.787 }, 00:20:22.787 "multi_ctrlr": false, 00:20:22.787 "ana_reporting": false 00:20:22.787 }, 00:20:22.787 "vs": { 00:20:22.787 "nvme_version": "1.4" 00:20:22.787 }, 00:20:22.787 "ns_data": { 00:20:22.787 "id": 1, 00:20:22.787 "can_share": false 00:20:22.787 } 00:20:22.787 } 00:20:22.787 ], 00:20:22.787 "mp_policy": "active_passive" 00:20:22.787 } 00:20:22.787 } 00:20:22.787 ]' 00:20:22.787 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:22.787 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:22.787 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:22.787 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:22.787 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:22.787 23:54:10 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:22.787 23:54:10 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:22.787 23:54:10 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:22.787 23:54:10 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:22.787 23:54:10 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:22.787 23:54:10 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:23.048 23:54:11 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=6560b0fa-a51f-4ce5-9686-0a6d5b7b2b34 00:20:23.048 23:54:11 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:23.048 23:54:11 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6560b0fa-a51f-4ce5-9686-0a6d5b7b2b34 00:20:23.310 23:54:11 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:23.569 23:54:11 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=56b1d744-844f-428d-ae2b-cb39ccec7821 00:20:23.570 23:54:11 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 56b1d744-844f-428d-ae2b-cb39ccec7821 00:20:23.828 23:54:11 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=4c185854-0b17-49e7-9297-cc29c822873b 00:20:23.828 23:54:11 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:23.828 23:54:11 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4c185854-0b17-49e7-9297-cc29c822873b 00:20:23.828 23:54:11 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:23.828 23:54:11 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:23.828 23:54:11 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=4c185854-0b17-49e7-9297-cc29c822873b 00:20:23.828 23:54:11 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:23.828 23:54:11 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 4c185854-0b17-49e7-9297-cc29c822873b 00:20:23.828 23:54:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4c185854-0b17-49e7-9297-cc29c822873b 00:20:23.828 23:54:11 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:23.828 23:54:11 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:23.828 23:54:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:23.828 23:54:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c185854-0b17-49e7-9297-cc29c822873b 00:20:24.086 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:24.086 { 00:20:24.086 "name": "4c185854-0b17-49e7-9297-cc29c822873b", 00:20:24.086 "aliases": [ 00:20:24.086 "lvs/nvme0n1p0" 00:20:24.086 ], 00:20:24.086 "product_name": "Logical Volume", 00:20:24.086 "block_size": 4096, 00:20:24.086 "num_blocks": 26476544, 00:20:24.086 "uuid": "4c185854-0b17-49e7-9297-cc29c822873b", 00:20:24.086 "assigned_rate_limits": { 00:20:24.086 "rw_ios_per_sec": 0, 00:20:24.086 "rw_mbytes_per_sec": 0, 00:20:24.086 "r_mbytes_per_sec": 0, 00:20:24.086 "w_mbytes_per_sec": 0 00:20:24.086 }, 00:20:24.086 "claimed": false, 00:20:24.086 "zoned": false, 00:20:24.086 "supported_io_types": { 00:20:24.086 "read": true, 00:20:24.086 "write": true, 00:20:24.086 "unmap": true, 00:20:24.086 "flush": false, 00:20:24.086 "reset": true, 00:20:24.086 "nvme_admin": false, 00:20:24.086 "nvme_io": false, 00:20:24.086 "nvme_io_md": false, 00:20:24.086 "write_zeroes": true, 00:20:24.086 "zcopy": false, 00:20:24.086 "get_zone_info": false, 00:20:24.086 "zone_management": false, 00:20:24.086 "zone_append": false, 00:20:24.086 "compare": false, 00:20:24.086 "compare_and_write": false, 00:20:24.086 "abort": false, 00:20:24.086 "seek_hole": true, 00:20:24.086 "seek_data": true, 00:20:24.086 "copy": false, 00:20:24.086 "nvme_iov_md": false 00:20:24.086 }, 00:20:24.086 "driver_specific": { 00:20:24.086 "lvol": { 00:20:24.086 "lvol_store_uuid": "56b1d744-844f-428d-ae2b-cb39ccec7821", 00:20:24.086 "base_bdev": "nvme0n1", 00:20:24.086 "thin_provision": true, 00:20:24.086 "num_allocated_clusters": 0, 00:20:24.086 "snapshot": false, 00:20:24.086 "clone": false, 00:20:24.086 "esnap_clone": false 00:20:24.086 } 00:20:24.086 } 00:20:24.086 } 00:20:24.086 ]' 00:20:24.086 23:54:12 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:24.086 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:24.086 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:24.087 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:24.087 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:24.087 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:24.087 23:54:12 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:24.087 23:54:12 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:24.087 23:54:12 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:24.347 23:54:12 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:24.347 23:54:12 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:24.347 23:54:12 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 4c185854-0b17-49e7-9297-cc29c822873b 00:20:24.347 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4c185854-0b17-49e7-9297-cc29c822873b 00:20:24.347 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:24.347 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:24.347 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:24.347 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c185854-0b17-49e7-9297-cc29c822873b 00:20:24.609 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:24.609 { 00:20:24.609 "name": "4c185854-0b17-49e7-9297-cc29c822873b", 00:20:24.609 "aliases": [ 00:20:24.609 "lvs/nvme0n1p0" 00:20:24.609 ], 00:20:24.609 "product_name": "Logical Volume", 00:20:24.609 "block_size": 4096, 00:20:24.609 "num_blocks": 26476544, 00:20:24.609 "uuid": "4c185854-0b17-49e7-9297-cc29c822873b", 00:20:24.609 "assigned_rate_limits": { 00:20:24.609 "rw_ios_per_sec": 0, 00:20:24.609 "rw_mbytes_per_sec": 0, 00:20:24.609 "r_mbytes_per_sec": 0, 00:20:24.609 "w_mbytes_per_sec": 0 00:20:24.609 }, 00:20:24.609 "claimed": false, 00:20:24.609 "zoned": false, 00:20:24.609 "supported_io_types": { 00:20:24.609 "read": true, 00:20:24.609 "write": true, 00:20:24.609 "unmap": true, 00:20:24.609 "flush": false, 00:20:24.609 "reset": true, 00:20:24.609 "nvme_admin": false, 00:20:24.609 "nvme_io": false, 00:20:24.609 "nvme_io_md": false, 00:20:24.609 "write_zeroes": true, 00:20:24.609 "zcopy": false, 00:20:24.609 "get_zone_info": false, 00:20:24.609 "zone_management": false, 00:20:24.609 "zone_append": false, 00:20:24.609 "compare": false, 00:20:24.609 "compare_and_write": false, 00:20:24.609 "abort": false, 00:20:24.609 "seek_hole": true, 00:20:24.609 "seek_data": true, 00:20:24.609 "copy": false, 00:20:24.609 "nvme_iov_md": false 00:20:24.609 }, 00:20:24.609 "driver_specific": { 00:20:24.609 "lvol": { 00:20:24.609 "lvol_store_uuid": "56b1d744-844f-428d-ae2b-cb39ccec7821", 00:20:24.609 "base_bdev": "nvme0n1", 00:20:24.609 "thin_provision": true, 00:20:24.609 "num_allocated_clusters": 0, 00:20:24.609 "snapshot": false, 00:20:24.609 "clone": false, 00:20:24.609 "esnap_clone": false 00:20:24.609 } 00:20:24.609 } 00:20:24.609 } 00:20:24.609 ]' 00:20:24.609 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
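The bdev_get_bdevs/jq pairs in this stretch are get_bdev_size measuring the thin-provisioned lvol that was just carved out of the lvstore; the same lvol is queried again further down while the write-buffer cache split is sized. Stripped of the xtrace noise, the query that produces the bs=4096, nb=26476544 and bdev_size=103424 values that follow is roughly the sketch below (the rpc call, jq filters and UUID are taken verbatim from the trace; the surrounding helper in autotest_common.sh may differ in wording):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_info=$($rpc_py bdev_get_bdevs -b 4c185854-0b17-49e7-9297-cc29c822873b)

    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 bytes per block
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 blocks
    echo $(( bs * nb / 1024 / 1024 ))              # 103424 (MiB), matching the bdev_size in the trace
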
00:20:24.609 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:24.609 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:24.609 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:24.609 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:24.609 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:24.609 23:54:12 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:24.609 23:54:12 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:24.870 23:54:12 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:24.870 23:54:12 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 4c185854-0b17-49e7-9297-cc29c822873b 00:20:24.870 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4c185854-0b17-49e7-9297-cc29c822873b 00:20:24.870 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:24.870 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:24.870 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:24.870 23:54:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c185854-0b17-49e7-9297-cc29c822873b 00:20:25.129 23:54:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:25.129 { 00:20:25.129 "name": "4c185854-0b17-49e7-9297-cc29c822873b", 00:20:25.129 "aliases": [ 00:20:25.129 "lvs/nvme0n1p0" 00:20:25.129 ], 00:20:25.129 "product_name": "Logical Volume", 00:20:25.129 "block_size": 4096, 00:20:25.129 "num_blocks": 26476544, 00:20:25.129 "uuid": "4c185854-0b17-49e7-9297-cc29c822873b", 00:20:25.129 "assigned_rate_limits": { 00:20:25.129 "rw_ios_per_sec": 0, 00:20:25.129 "rw_mbytes_per_sec": 0, 00:20:25.129 "r_mbytes_per_sec": 0, 00:20:25.129 "w_mbytes_per_sec": 0 00:20:25.129 }, 00:20:25.129 "claimed": false, 00:20:25.129 "zoned": false, 00:20:25.129 "supported_io_types": { 00:20:25.129 "read": true, 00:20:25.129 "write": true, 00:20:25.129 "unmap": true, 00:20:25.129 "flush": false, 00:20:25.129 "reset": true, 00:20:25.129 "nvme_admin": false, 00:20:25.129 "nvme_io": false, 00:20:25.129 "nvme_io_md": false, 00:20:25.129 "write_zeroes": true, 00:20:25.129 "zcopy": false, 00:20:25.129 "get_zone_info": false, 00:20:25.129 "zone_management": false, 00:20:25.129 "zone_append": false, 00:20:25.129 "compare": false, 00:20:25.129 "compare_and_write": false, 00:20:25.129 "abort": false, 00:20:25.129 "seek_hole": true, 00:20:25.129 "seek_data": true, 00:20:25.129 "copy": false, 00:20:25.129 "nvme_iov_md": false 00:20:25.129 }, 00:20:25.129 "driver_specific": { 00:20:25.129 "lvol": { 00:20:25.129 "lvol_store_uuid": "56b1d744-844f-428d-ae2b-cb39ccec7821", 00:20:25.129 "base_bdev": "nvme0n1", 00:20:25.129 "thin_provision": true, 00:20:25.129 "num_allocated_clusters": 0, 00:20:25.129 "snapshot": false, 00:20:25.129 "clone": false, 00:20:25.129 "esnap_clone": false 00:20:25.129 } 00:20:25.129 } 00:20:25.129 } 00:20:25.129 ]' 00:20:25.129 23:54:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:25.129 23:54:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:25.129 23:54:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:25.129 23:54:13 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:25.129 23:54:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:25.129 23:54:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:25.129 23:54:13 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:25.129 23:54:13 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4c185854-0b17-49e7-9297-cc29c822873b --l2p_dram_limit 10' 00:20:25.129 23:54:13 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:25.129 23:54:13 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:25.129 23:54:13 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:25.129 23:54:13 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:25.129 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:25.129 23:54:13 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4c185854-0b17-49e7-9297-cc29c822873b --l2p_dram_limit 10 -c nvc0n1p0 00:20:25.389 [2024-11-26 23:54:13.273881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.389 [2024-11-26 23:54:13.273934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:25.389 [2024-11-26 23:54:13.273948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:25.389 [2024-11-26 23:54:13.273959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.389 [2024-11-26 23:54:13.274023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.389 [2024-11-26 23:54:13.274037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.389 [2024-11-26 23:54:13.274045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:25.389 [2024-11-26 23:54:13.274057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.389 [2024-11-26 23:54:13.274077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:25.389 [2024-11-26 23:54:13.274364] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:25.389 [2024-11-26 23:54:13.274394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.389 [2024-11-26 23:54:13.274404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.389 [2024-11-26 23:54:13.274413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:20:25.389 [2024-11-26 23:54:13.274424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.389 [2024-11-26 23:54:13.274508] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b112cea9-c2c5-4cd0-a5f8-134f526173e0 00:20:25.389 [2024-11-26 23:54:13.275908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.389 [2024-11-26 23:54:13.275939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:25.389 [2024-11-26 23:54:13.275951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:25.389 [2024-11-26 23:54:13.275959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.389 [2024-11-26 23:54:13.283159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.389 [2024-11-26 
23:54:13.283190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.389 [2024-11-26 23:54:13.283201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.144 ms 00:20:25.389 [2024-11-26 23:54:13.283209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.389 [2024-11-26 23:54:13.283289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.389 [2024-11-26 23:54:13.283298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.389 [2024-11-26 23:54:13.283308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:25.389 [2024-11-26 23:54:13.283315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.390 [2024-11-26 23:54:13.283362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.390 [2024-11-26 23:54:13.283371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:25.390 [2024-11-26 23:54:13.283387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:25.390 [2024-11-26 23:54:13.283399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.390 [2024-11-26 23:54:13.283423] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:25.390 [2024-11-26 23:54:13.285242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.390 [2024-11-26 23:54:13.285273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.390 [2024-11-26 23:54:13.285290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 00:20:25.390 [2024-11-26 23:54:13.285299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.390 [2024-11-26 23:54:13.285335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.390 [2024-11-26 23:54:13.285348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:25.390 [2024-11-26 23:54:13.285356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:25.390 [2024-11-26 23:54:13.285592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.390 [2024-11-26 23:54:13.285609] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:25.390 [2024-11-26 23:54:13.285756] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:25.390 [2024-11-26 23:54:13.285782] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:25.390 [2024-11-26 23:54:13.285816] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:25.390 [2024-11-26 23:54:13.285827] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:25.390 [2024-11-26 23:54:13.285841] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:25.390 [2024-11-26 23:54:13.285849] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:25.390 [2024-11-26 23:54:13.285860] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:25.390 [2024-11-26 23:54:13.285868] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:25.390 [2024-11-26 23:54:13.285877] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:25.390 [2024-11-26 23:54:13.285888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.390 [2024-11-26 23:54:13.285898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:25.390 [2024-11-26 23:54:13.285907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:25.390 [2024-11-26 23:54:13.285916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.390 [2024-11-26 23:54:13.286001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.390 [2024-11-26 23:54:13.286016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:25.390 [2024-11-26 23:54:13.286024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:25.390 [2024-11-26 23:54:13.286035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.390 [2024-11-26 23:54:13.286145] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:25.390 [2024-11-26 23:54:13.286157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:25.390 [2024-11-26 23:54:13.286166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:25.390 [2024-11-26 23:54:13.286197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:25.390 [2024-11-26 23:54:13.286222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.390 [2024-11-26 23:54:13.286239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:25.390 [2024-11-26 23:54:13.286248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:25.390 [2024-11-26 23:54:13.286255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.390 [2024-11-26 23:54:13.286267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:25.390 [2024-11-26 23:54:13.286275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:25.390 [2024-11-26 23:54:13.286284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:25.390 [2024-11-26 23:54:13.286301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:25.390 [2024-11-26 23:54:13.286329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:25.390 
[2024-11-26 23:54:13.286355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:25.390 [2024-11-26 23:54:13.286379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:25.390 [2024-11-26 23:54:13.286404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:25.390 [2024-11-26 23:54:13.286425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.390 [2024-11-26 23:54:13.286440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:25.390 [2024-11-26 23:54:13.286447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:25.390 [2024-11-26 23:54:13.286454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.390 [2024-11-26 23:54:13.286462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:25.390 [2024-11-26 23:54:13.286469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:25.390 [2024-11-26 23:54:13.286477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:25.390 [2024-11-26 23:54:13.286492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:25.390 [2024-11-26 23:54:13.286499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286507] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:25.390 [2024-11-26 23:54:13.286520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:25.390 [2024-11-26 23:54:13.286531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.390 [2024-11-26 23:54:13.286551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:25.390 [2024-11-26 23:54:13.286558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:25.390 [2024-11-26 23:54:13.286566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:25.390 [2024-11-26 23:54:13.286574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:25.390 [2024-11-26 23:54:13.286582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:25.390 [2024-11-26 23:54:13.286589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:25.390 [2024-11-26 23:54:13.286603] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:25.390 [2024-11-26 
23:54:13.286612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.390 [2024-11-26 23:54:13.286622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:25.390 [2024-11-26 23:54:13.286630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:25.390 [2024-11-26 23:54:13.286639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:25.390 [2024-11-26 23:54:13.286646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:25.390 [2024-11-26 23:54:13.286654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:25.390 [2024-11-26 23:54:13.286662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:25.390 [2024-11-26 23:54:13.286672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:25.390 [2024-11-26 23:54:13.286679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:25.390 [2024-11-26 23:54:13.286688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:25.390 [2024-11-26 23:54:13.286695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:25.390 [2024-11-26 23:54:13.286703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:25.390 [2024-11-26 23:54:13.286710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:25.390 [2024-11-26 23:54:13.286719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:25.390 [2024-11-26 23:54:13.286727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:25.391 [2024-11-26 23:54:13.286735] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:25.391 [2024-11-26 23:54:13.286744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.391 [2024-11-26 23:54:13.286753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:25.391 [2024-11-26 23:54:13.286760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:25.391 [2024-11-26 23:54:13.286769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:25.391 [2024-11-26 23:54:13.286776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:25.391 [2024-11-26 23:54:13.286785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.391 [2024-11-26 23:54:13.286804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:25.391 [2024-11-26 23:54:13.286816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:20:25.391 [2024-11-26 23:54:13.286824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.391 [2024-11-26 23:54:13.286870] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:25.391 [2024-11-26 23:54:13.286880] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:29.685 [2024-11-26 23:54:17.179629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.179746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:29.685 [2024-11-26 23:54:17.179770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3892.730 ms 00:20:29.685 [2024-11-26 23:54:17.179780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.199947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.200020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.685 [2024-11-26 23:54:17.200040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.001 ms 00:20:29.685 [2024-11-26 23:54:17.200050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.200218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.200231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.685 [2024-11-26 23:54:17.200244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:29.685 [2024-11-26 23:54:17.200252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.218089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.218154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.685 [2024-11-26 23:54:17.218170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.792 ms 00:20:29.685 [2024-11-26 23:54:17.218182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.218226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.218236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.685 [2024-11-26 23:54:17.218248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.685 [2024-11-26 23:54:17.218257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.219032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.219089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.685 [2024-11-26 23:54:17.219106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:20:29.685 [2024-11-26 23:54:17.219116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 
[2024-11-26 23:54:17.219261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.219272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.685 [2024-11-26 23:54:17.219284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:29.685 [2024-11-26 23:54:17.219294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.231433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.231494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.685 [2024-11-26 23:54:17.231516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.111 ms 00:20:29.685 [2024-11-26 23:54:17.231526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.261962] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:29.685 [2024-11-26 23:54:17.267060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.267115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.685 [2024-11-26 23:54:17.267130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.425 ms 00:20:29.685 [2024-11-26 23:54:17.267148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.355216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.355311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:29.685 [2024-11-26 23:54:17.355335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.010 ms 00:20:29.685 [2024-11-26 23:54:17.355352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.355608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.355626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.685 [2024-11-26 23:54:17.355637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:20:29.685 [2024-11-26 23:54:17.355648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.362502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.362569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:29.685 [2024-11-26 23:54:17.362586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.811 ms 00:20:29.685 [2024-11-26 23:54:17.362599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.368000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.368067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:29.685 [2024-11-26 23:54:17.368080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.339 ms 00:20:29.685 [2024-11-26 23:54:17.368091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.368474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.368496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.685 
[2024-11-26 23:54:17.368508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:20:29.685 [2024-11-26 23:54:17.368523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.416268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.416339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:29.685 [2024-11-26 23:54:17.416357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.668 ms 00:20:29.685 [2024-11-26 23:54:17.416369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.425126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.425192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:29.685 [2024-11-26 23:54:17.425206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.688 ms 00:20:29.685 [2024-11-26 23:54:17.425219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.431977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.432045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:29.685 [2024-11-26 23:54:17.432057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.704 ms 00:20:29.685 [2024-11-26 23:54:17.432067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.438439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.438502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.685 [2024-11-26 23:54:17.438513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.319 ms 00:20:29.685 [2024-11-26 23:54:17.438529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.438589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.438603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.685 [2024-11-26 23:54:17.438614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:29.685 [2024-11-26 23:54:17.438626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.438740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.685 [2024-11-26 23:54:17.438756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:29.685 [2024-11-26 23:54:17.438766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:29.685 [2024-11-26 23:54:17.438781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.685 [2024-11-26 23:54:17.440250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4165.708 ms, result 0 00:20:29.685 { 00:20:29.685 "name": "ftl0", 00:20:29.685 "uuid": "b112cea9-c2c5-4cd0-a5f8-134f526173e0" 00:20:29.685 } 00:20:29.685 23:54:17 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:29.685 23:54:17 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:29.685 23:54:17 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:29.685 23:54:17 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:29.948 [2024-11-26 23:54:17.879342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.879408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:29.948 [2024-11-26 23:54:17.879428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.948 [2024-11-26 23:54:17.879437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.879466] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.948 [2024-11-26 23:54:17.880501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.880561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:29.948 [2024-11-26 23:54:17.880578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:20:29.948 [2024-11-26 23:54:17.880590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.880909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.880933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:29.948 [2024-11-26 23:54:17.880951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:29.948 [2024-11-26 23:54:17.880962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.884246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.884271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:29.948 [2024-11-26 23:54:17.884281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:20:29.948 [2024-11-26 23:54:17.884293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.890623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.890673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:29.948 [2024-11-26 23:54:17.890685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.310 ms 00:20:29.948 [2024-11-26 23:54:17.890700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.893902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.893970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.948 [2024-11-26 23:54:17.893981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:20:29.948 [2024-11-26 23:54:17.893991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.900717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.900786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.948 [2024-11-26 23:54:17.900815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.675 ms 00:20:29.948 [2024-11-26 23:54:17.900826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.900968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.900986] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.948 [2024-11-26 23:54:17.900995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:29.948 [2024-11-26 23:54:17.901006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.904351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.904414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:29.948 [2024-11-26 23:54:17.904425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.320 ms 00:20:29.948 [2024-11-26 23:54:17.904435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.906604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.906667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:29.948 [2024-11-26 23:54:17.906677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:20:29.948 [2024-11-26 23:54:17.906687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.908452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.908514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:29.948 [2024-11-26 23:54:17.908526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:20:29.948 [2024-11-26 23:54:17.908537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.910322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.948 [2024-11-26 23:54:17.910381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:29.948 [2024-11-26 23:54:17.910392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:20:29.948 [2024-11-26 23:54:17.910401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.948 [2024-11-26 23:54:17.910447] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:29.948 [2024-11-26 23:54:17.910467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910558] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 
[2024-11-26 23:54:17.910808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.910997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:29.948 [2024-11-26 23:54:17.911049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:29.948 [2024-11-26 23:54:17.911353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:29.949 [2024-11-26 23:54:17.911361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:29.949 [2024-11-26 23:54:17.911372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:29.949 [2024-11-26 23:54:17.911380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:29.949 [2024-11-26 23:54:17.911390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:29.949 [2024-11-26 23:54:17.911398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:29.949 [2024-11-26 23:54:17.911409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:29.949 [2024-11-26 23:54:17.911417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:29.949 [2024-11-26 23:54:17.911440] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:29.949 [2024-11-26 23:54:17.911451] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b112cea9-c2c5-4cd0-a5f8-134f526173e0 00:20:29.949 [2024-11-26 23:54:17.911463] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:29.949 [2024-11-26 23:54:17.911472] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:29.949 [2024-11-26 23:54:17.911485] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:29.949 [2024-11-26 23:54:17.911494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:29.949 [2024-11-26 23:54:17.911508] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:29.949 [2024-11-26 23:54:17.911516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:29.949 [2024-11-26 23:54:17.911526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:29.949 [2024-11-26 23:54:17.911533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:29.949 [2024-11-26 23:54:17.911542] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:29.949 [2024-11-26 23:54:17.911550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.949 [2024-11-26 23:54:17.911560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:29.949 [2024-11-26 23:54:17.911575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:20:29.949 [2024-11-26 23:54:17.911585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.914752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.949 [2024-11-26 23:54:17.914829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.949 [2024-11-26 23:54:17.914843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:20:29.949 [2024-11-26 23:54:17.914854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.915030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.949 [2024-11-26 23:54:17.915044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.949 [2024-11-26 23:54:17.915054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:29.949 [2024-11-26 23:54:17.915064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.925999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.926057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.949 [2024-11-26 23:54:17.926072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.926084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.926159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.926171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.949 [2024-11-26 23:54:17.926184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.926196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.926279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.926297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.949 [2024-11-26 23:54:17.926306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.926319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.926338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.926350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.949 [2024-11-26 23:54:17.926358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.926369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.945899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.945962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.949 [2024-11-26 23:54:17.945978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 
[2024-11-26 23:54:17.945989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.961161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.961226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.949 [2024-11-26 23:54:17.961238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.961250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.961348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.961368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.949 [2024-11-26 23:54:17.961377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.961389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.961443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.961456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.949 [2024-11-26 23:54:17.961465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.961475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.961563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.961576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.949 [2024-11-26 23:54:17.961591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.961602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.961645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.961658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:29.949 [2024-11-26 23:54:17.961666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.961677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.961731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.961912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.949 [2024-11-26 23:54:17.961923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.961935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.962002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.949 [2024-11-26 23:54:17.962016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.949 [2024-11-26 23:54:17.962026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.949 [2024-11-26 23:54:17.962038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.949 [2024-11-26 23:54:17.962218] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 82.818 ms, result 0 00:20:29.949 true 00:20:29.949 23:54:17 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88270 00:20:29.949 
23:54:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88270 ']' 00:20:29.949 23:54:17 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88270 00:20:29.949 23:54:17 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:29.949 23:54:17 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.949 23:54:17 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88270 00:20:29.949 23:54:18 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.949 23:54:18 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.949 killing process with pid 88270 00:20:29.949 23:54:18 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88270' 00:20:29.949 23:54:18 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88270 00:20:29.949 23:54:18 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88270 00:20:35.238 23:54:23 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:39.441 262144+0 records in 00:20:39.441 262144+0 records out 00:20:39.441 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.51902 s, 305 MB/s 00:20:39.441 23:54:26 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:40.383 23:54:28 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:40.383 [2024-11-26 23:54:28.412964] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:20:40.383 [2024-11-26 23:54:28.413059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88484 ] 00:20:40.644 [2024-11-26 23:54:28.556539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.644 [2024-11-26 23:54:28.581768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.644 [2024-11-26 23:54:28.692636] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.644 [2024-11-26 23:54:28.692707] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.907 [2024-11-26 23:54:28.852448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.907 [2024-11-26 23:54:28.852520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.907 [2024-11-26 23:54:28.852537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:40.907 [2024-11-26 23:54:28.852552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.907 [2024-11-26 23:54:28.852618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.852630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.908 [2024-11-26 23:54:28.852639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:40.908 [2024-11-26 23:54:28.852654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.852689] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:40.908 [2024-11-26 23:54:28.852987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.908 [2024-11-26 23:54:28.853009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.853019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.908 [2024-11-26 23:54:28.853032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:20:40.908 [2024-11-26 23:54:28.853041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.855468] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.908 [2024-11-26 23:54:28.860179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.860236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.908 [2024-11-26 23:54:28.860249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:20:40.908 [2024-11-26 23:54:28.860268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.860349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.860368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.908 [2024-11-26 23:54:28.860378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:40.908 [2024-11-26 23:54:28.860386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.872150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.872200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.908 [2024-11-26 23:54:28.872217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.718 ms 00:20:40.908 [2024-11-26 23:54:28.872226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.872343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.872355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.908 [2024-11-26 23:54:28.872364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:40.908 [2024-11-26 23:54:28.872376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.872441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.872457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.908 [2024-11-26 23:54:28.872466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:40.908 [2024-11-26 23:54:28.872479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.872510] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.908 [2024-11-26 23:54:28.875149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.875194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.908 [2024-11-26 23:54:28.875205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:20:40.908 [2024-11-26 23:54:28.875213] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.875251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.875260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.908 [2024-11-26 23:54:28.875269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:40.908 [2024-11-26 23:54:28.875281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.875308] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.908 [2024-11-26 23:54:28.875331] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.908 [2024-11-26 23:54:28.875376] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.908 [2024-11-26 23:54:28.875394] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.908 [2024-11-26 23:54:28.875504] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.908 [2024-11-26 23:54:28.875516] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.908 [2024-11-26 23:54:28.875535] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.908 [2024-11-26 23:54:28.875546] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.908 [2024-11-26 23:54:28.875556] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.908 [2024-11-26 23:54:28.875566] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:40.908 [2024-11-26 23:54:28.875574] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.908 [2024-11-26 23:54:28.875582] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.908 [2024-11-26 23:54:28.875590] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.908 [2024-11-26 23:54:28.875602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.875610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.908 [2024-11-26 23:54:28.875626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:20:40.908 [2024-11-26 23:54:28.875636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.875721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-26 23:54:28.875730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.908 [2024-11-26 23:54:28.875737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:40.908 [2024-11-26 23:54:28.875749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-26 23:54:28.875890] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.908 [2024-11-26 23:54:28.875905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.908 [2024-11-26 23:54:28.875915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.908 
[2024-11-26 23:54:28.875924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.908 [2024-11-26 23:54:28.875934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.908 [2024-11-26 23:54:28.875942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.908 [2024-11-26 23:54:28.875951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:40.908 [2024-11-26 23:54:28.875960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.908 [2024-11-26 23:54:28.875969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:40.908 [2024-11-26 23:54:28.875977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.908 [2024-11-26 23:54:28.875988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.908 [2024-11-26 23:54:28.875996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:40.908 [2024-11-26 23:54:28.876005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.908 [2024-11-26 23:54:28.876012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.908 [2024-11-26 23:54:28.876021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:40.908 [2024-11-26 23:54:28.876031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.908 [2024-11-26 23:54:28.876048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:40.908 [2024-11-26 23:54:28.876056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.908 [2024-11-26 23:54:28.876072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.908 [2024-11-26 23:54:28.876087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.908 [2024-11-26 23:54:28.876095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.908 [2024-11-26 23:54:28.876111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.908 [2024-11-26 23:54:28.876123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.908 [2024-11-26 23:54:28.876139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.908 [2024-11-26 23:54:28.876147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.908 [2024-11-26 23:54:28.876163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.908 [2024-11-26 23:54:28.876170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.908 [2024-11-26 23:54:28.876182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:40.908 [2024-11-26 23:54:28.876189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:40.908 [2024-11-26 23:54:28.876195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.908 [2024-11-26 23:54:28.876201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.908 [2024-11-26 23:54:28.876209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:40.908 [2024-11-26 23:54:28.876215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.908 [2024-11-26 23:54:28.876228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:40.908 [2024-11-26 23:54:28.876238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.908 [2024-11-26 23:54:28.876244] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.909 [2024-11-26 23:54:28.876255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.909 [2024-11-26 23:54:28.876263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.909 [2024-11-26 23:54:28.876270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.909 [2024-11-26 23:54:28.876280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.909 [2024-11-26 23:54:28.876288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.909 [2024-11-26 23:54:28.876294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.909 [2024-11-26 23:54:28.876302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.909 [2024-11-26 23:54:28.876309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.909 [2024-11-26 23:54:28.876316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.909 [2024-11-26 23:54:28.876324] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.909 [2024-11-26 23:54:28.876341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.909 [2024-11-26 23:54:28.876353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:40.909 [2024-11-26 23:54:28.876361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:40.909 [2024-11-26 23:54:28.876369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:40.909 [2024-11-26 23:54:28.876378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:40.909 [2024-11-26 23:54:28.876385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:40.909 [2024-11-26 23:54:28.876392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:40.909 [2024-11-26 23:54:28.876400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:40.909 [2024-11-26 23:54:28.876407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:40.909 [2024-11-26 23:54:28.876414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:40.909 [2024-11-26 23:54:28.876429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:40.909 [2024-11-26 23:54:28.876436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:40.909 [2024-11-26 23:54:28.876443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:40.909 [2024-11-26 23:54:28.876450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:40.909 [2024-11-26 23:54:28.876458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:40.909 [2024-11-26 23:54:28.876465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.909 [2024-11-26 23:54:28.876474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.909 [2024-11-26 23:54:28.876487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.909 [2024-11-26 23:54:28.876495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.909 [2024-11-26 23:54:28.876502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.909 [2024-11-26 23:54:28.876512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.909 [2024-11-26 23:54:28.876520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.876528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.909 [2024-11-26 23:54:28.876536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:20:40.909 [2024-11-26 23:54:28.876546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.895898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.896133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.909 [2024-11-26 23:54:28.896155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.273 ms 00:20:40.909 [2024-11-26 23:54:28.896164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.896266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.896276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.909 [2024-11-26 23:54:28.896287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 
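The layout numbers dumped above are internally consistent and can be cross-checked by hand. A minimal sketch follows (plain Python, not an SPDK tool; the 4 KiB FTL block size is inferred from the dump itself, since the 0x5000-block region corresponds to the 80.00 MiB l2p region):

    # Cross-check of the layout dump above; FTL_BLOCK is inferred from the log, not read from SPDK.
    FTL_BLOCK = 4096                        # bytes per FTL block (80.00 MiB / 0x5000 blocks)
    MiB = 2**20

    l2p_blocks = 0x5000                     # "Region type:0x2 ... blk_sz:0x5000"
    print(l2p_blocks * FTL_BLOCK / MiB)     # 80.0     -> "Region l2p ... blocks: 80.00 MiB"

    entries, addr_size = 20971520, 4        # "L2P entries" / "L2P address size"
    print(entries * addr_size / MiB)        # 80.0     -> the same 80 MiB L2P table, sized entries x address size

    data_blocks = 0x1900000                 # "Region type:0x9 ... blk_sz:0x1900000"
    print(data_blocks * FTL_BLOCK / MiB)    # 102400.0 -> "Region data_btm ... blocks: 102400.00 MiB"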
00:20:40.909 [2024-11-26 23:54:28.896303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.916153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.916200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.909 [2024-11-26 23:54:28.916215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.781 ms 00:20:40.909 [2024-11-26 23:54:28.916232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.916278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.916290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.909 [2024-11-26 23:54:28.916301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:40.909 [2024-11-26 23:54:28.916314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.916835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.916880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.909 [2024-11-26 23:54:28.916897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:20:40.909 [2024-11-26 23:54:28.916908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.917078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.917097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.909 [2024-11-26 23:54:28.917114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:20:40.909 [2024-11-26 23:54:28.917126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.924214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.924246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.909 [2024-11-26 23:54:28.924256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.063 ms 00:20:40.909 [2024-11-26 23:54:28.924264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.927553] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:40.909 [2024-11-26 23:54:28.927589] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.909 [2024-11-26 23:54:28.927601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.927610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.909 [2024-11-26 23:54:28.927618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:20:40.909 [2024-11-26 23:54:28.927625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.942548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.942594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.909 [2024-11-26 23:54:28.942606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.878 ms 00:20:40.909 [2024-11-26 23:54:28.942615] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.944706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.944742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.909 [2024-11-26 23:54:28.944752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:20:40.909 [2024-11-26 23:54:28.944759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.946698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.946732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.909 [2024-11-26 23:54:28.946741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:20:40.909 [2024-11-26 23:54:28.946748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.947087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.947099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.909 [2024-11-26 23:54:28.947113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:40.909 [2024-11-26 23:54:28.947123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.966170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.966219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.909 [2024-11-26 23:54:28.966232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.027 ms 00:20:40.909 [2024-11-26 23:54:28.966240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.974003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:40.909 [2024-11-26 23:54:28.976665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.976704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.909 [2024-11-26 23:54:28.976720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.385 ms 00:20:40.909 [2024-11-26 23:54:28.976734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.976850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.976863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:40.909 [2024-11-26 23:54:28.976873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:40.909 [2024-11-26 23:54:28.976888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.976971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.909 [2024-11-26 23:54:28.976982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.909 [2024-11-26 23:54:28.976991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:40.909 [2024-11-26 23:54:28.977002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.909 [2024-11-26 23:54:28.977027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.910 [2024-11-26 23:54:28.977036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:40.910 [2024-11-26 23:54:28.977044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:40.910 [2024-11-26 23:54:28.977052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.910 [2024-11-26 23:54:28.977087] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:40.910 [2024-11-26 23:54:28.977097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.910 [2024-11-26 23:54:28.977105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:40.910 [2024-11-26 23:54:28.977113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:40.910 [2024-11-26 23:54:28.977123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.910 [2024-11-26 23:54:28.981584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.910 [2024-11-26 23:54:28.981621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.910 [2024-11-26 23:54:28.981632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.439 ms 00:20:40.910 [2024-11-26 23:54:28.981647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.910 [2024-11-26 23:54:28.981721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.910 [2024-11-26 23:54:28.981731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.910 [2024-11-26 23:54:28.981742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:40.910 [2024-11-26 23:54:28.981750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.910 [2024-11-26 23:54:28.982882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.966 ms, result 0 00:20:42.304  [2024-11-26T23:54:31.007Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-26T23:54:32.398Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-26T23:54:33.341Z] Copying: 31/1024 [MB] (10 MBps) [2024-11-26T23:54:34.284Z] Copying: 41/1024 [MB] (10 MBps) [2024-11-26T23:54:35.224Z] Copying: 70/1024 [MB] (28 MBps) [2024-11-26T23:54:36.163Z] Copying: 88/1024 [MB] (17 MBps) [2024-11-26T23:54:37.103Z] Copying: 107/1024 [MB] (19 MBps) [2024-11-26T23:54:38.046Z] Copying: 123/1024 [MB] (16 MBps) [2024-11-26T23:54:39.455Z] Copying: 136/1024 [MB] (12 MBps) [2024-11-26T23:54:40.027Z] Copying: 156/1024 [MB] (19 MBps) [2024-11-26T23:54:41.415Z] Copying: 177/1024 [MB] (21 MBps) [2024-11-26T23:54:42.358Z] Copying: 196/1024 [MB] (18 MBps) [2024-11-26T23:54:43.302Z] Copying: 222/1024 [MB] (25 MBps) [2024-11-26T23:54:44.246Z] Copying: 240/1024 [MB] (18 MBps) [2024-11-26T23:54:45.190Z] Copying: 259/1024 [MB] (18 MBps) [2024-11-26T23:54:46.131Z] Copying: 272/1024 [MB] (12 MBps) [2024-11-26T23:54:47.074Z] Copying: 294/1024 [MB] (22 MBps) [2024-11-26T23:54:48.023Z] Copying: 309/1024 [MB] (15 MBps) [2024-11-26T23:54:49.410Z] Copying: 320/1024 [MB] (10 MBps) [2024-11-26T23:54:50.090Z] Copying: 330/1024 [MB] (10 MBps) [2024-11-26T23:54:51.048Z] Copying: 343/1024 [MB] (12 MBps) [2024-11-26T23:54:52.434Z] Copying: 356/1024 [MB] (13 MBps) [2024-11-26T23:54:53.008Z] Copying: 370/1024 [MB] (14 MBps) [2024-11-26T23:54:54.392Z] Copying: 389120/1048576 [kB] (9592 kBps) [2024-11-26T23:54:55.336Z] Copying: 399328/1048576 [kB] (10208 kBps) [2024-11-26T23:54:56.277Z] Copying: 400/1024 [MB] (10 MBps) [2024-11-26T23:54:57.221Z] Copying: 420/1024 [MB] (20 MBps) 
[2024-11-26T23:54:58.162Z] Copying: 432/1024 [MB] (12 MBps) [2024-11-26T23:54:59.101Z] Copying: 450/1024 [MB] (17 MBps) [2024-11-26T23:55:00.043Z] Copying: 462/1024 [MB] (12 MBps) [2024-11-26T23:55:01.427Z] Copying: 474/1024 [MB] (12 MBps) [2024-11-26T23:55:02.000Z] Copying: 491/1024 [MB] (16 MBps) [2024-11-26T23:55:03.391Z] Copying: 509/1024 [MB] (18 MBps) [2024-11-26T23:55:04.336Z] Copying: 527/1024 [MB] (17 MBps) [2024-11-26T23:55:05.282Z] Copying: 545/1024 [MB] (18 MBps) [2024-11-26T23:55:06.225Z] Copying: 564/1024 [MB] (18 MBps) [2024-11-26T23:55:07.167Z] Copying: 574/1024 [MB] (10 MBps) [2024-11-26T23:55:08.112Z] Copying: 591/1024 [MB] (16 MBps) [2024-11-26T23:55:09.058Z] Copying: 608/1024 [MB] (16 MBps) [2024-11-26T23:55:10.002Z] Copying: 633392/1048576 [kB] (10068 kBps) [2024-11-26T23:55:11.392Z] Copying: 643544/1048576 [kB] (10152 kBps) [2024-11-26T23:55:12.337Z] Copying: 638/1024 [MB] (10 MBps) [2024-11-26T23:55:13.283Z] Copying: 648/1024 [MB] (10 MBps) [2024-11-26T23:55:14.229Z] Copying: 661/1024 [MB] (12 MBps) [2024-11-26T23:55:15.175Z] Copying: 672/1024 [MB] (11 MBps) [2024-11-26T23:55:16.119Z] Copying: 683/1024 [MB] (10 MBps) [2024-11-26T23:55:17.064Z] Copying: 694/1024 [MB] (11 MBps) [2024-11-26T23:55:18.010Z] Copying: 705/1024 [MB] (11 MBps) [2024-11-26T23:55:19.409Z] Copying: 716/1024 [MB] (10 MBps) [2024-11-26T23:55:20.357Z] Copying: 727/1024 [MB] (10 MBps) [2024-11-26T23:55:21.304Z] Copying: 738/1024 [MB] (11 MBps) [2024-11-26T23:55:22.250Z] Copying: 748/1024 [MB] (10 MBps) [2024-11-26T23:55:23.197Z] Copying: 759/1024 [MB] (10 MBps) [2024-11-26T23:55:24.140Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-26T23:55:25.085Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-26T23:55:26.031Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-26T23:55:27.422Z] Copying: 800/1024 [MB] (10 MBps) [2024-11-26T23:55:28.368Z] Copying: 810/1024 [MB] (10 MBps) [2024-11-26T23:55:29.313Z] Copying: 821/1024 [MB] (10 MBps) [2024-11-26T23:55:30.250Z] Copying: 831/1024 [MB] (10 MBps) [2024-11-26T23:55:31.195Z] Copying: 878/1024 [MB] (46 MBps) [2024-11-26T23:55:32.139Z] Copying: 901/1024 [MB] (23 MBps) [2024-11-26T23:55:33.135Z] Copying: 923/1024 [MB] (21 MBps) [2024-11-26T23:55:34.089Z] Copying: 943/1024 [MB] (20 MBps) [2024-11-26T23:55:35.032Z] Copying: 963/1024 [MB] (20 MBps) [2024-11-26T23:55:36.418Z] Copying: 984/1024 [MB] (20 MBps) [2024-11-26T23:55:36.992Z] Copying: 1006/1024 [MB] (22 MBps) [2024-11-26T23:55:36.992Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-26 23:55:36.896866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.896899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:48.861 [2024-11-26 23:55:36.896909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:48.861 [2024-11-26 23:55:36.896919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 23:55:36.896954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:48.861 [2024-11-26 23:55:36.897322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.897336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:48.861 [2024-11-26 23:55:36.897345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:21:48.861 [2024-11-26 23:55:36.897351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 
23:55:36.900112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.900253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:48.861 [2024-11-26 23:55:36.900267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:21:48.861 [2024-11-26 23:55:36.900274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 23:55:36.916314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.916342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:48.861 [2024-11-26 23:55:36.916350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.022 ms 00:21:48.861 [2024-11-26 23:55:36.916356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 23:55:36.920987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.921009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:48.861 [2024-11-26 23:55:36.921016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.607 ms 00:21:48.861 [2024-11-26 23:55:36.921023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 23:55:36.922963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.923068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:48.861 [2024-11-26 23:55:36.923079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.901 ms 00:21:48.861 [2024-11-26 23:55:36.923085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 23:55:36.927030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.927057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:48.861 [2024-11-26 23:55:36.927065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.923 ms 00:21:48.861 [2024-11-26 23:55:36.927071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 23:55:36.927155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.927162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:48.861 [2024-11-26 23:55:36.927169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:48.861 [2024-11-26 23:55:36.927174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 23:55:36.929840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.929942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:48.861 [2024-11-26 23:55:36.929953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.651 ms 00:21:48.861 [2024-11-26 23:55:36.929958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.861 [2024-11-26 23:55:36.931981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.861 [2024-11-26 23:55:36.932005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:48.861 [2024-11-26 23:55:36.932012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.999 ms 00:21:48.862 [2024-11-26 23:55:36.932018] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:48.862 [2024-11-26 23:55:36.933659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.862 [2024-11-26 23:55:36.933751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:48.862 [2024-11-26 23:55:36.933762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:21:48.862 [2024-11-26 23:55:36.933767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.862 [2024-11-26 23:55:36.935245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.862 [2024-11-26 23:55:36.935272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:48.862 [2024-11-26 23:55:36.935280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:21:48.862 [2024-11-26 23:55:36.935286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.862 [2024-11-26 23:55:36.935309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:48.862 [2024-11-26 23:55:36.935320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935436] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935602] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 
23:55:36.935751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:48.862 [2024-11-26 23:55:36.935821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:21:48.863 [2024-11-26 23:55:36.935902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:48.863 [2024-11-26 23:55:36.935952] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:48.863 [2024-11-26 23:55:36.935958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b112cea9-c2c5-4cd0-a5f8-134f526173e0 00:21:48.863 [2024-11-26 23:55:36.935967] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:48.863 [2024-11-26 23:55:36.935973] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:48.863 [2024-11-26 23:55:36.935978] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:48.863 [2024-11-26 23:55:36.935984] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:48.863 [2024-11-26 23:55:36.935989] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:48.863 [2024-11-26 23:55:36.935994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:48.863 [2024-11-26 23:55:36.936000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:48.863 [2024-11-26 23:55:36.936004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:48.863 [2024-11-26 23:55:36.936009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:48.863 [2024-11-26 23:55:36.936014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.863 [2024-11-26 23:55:36.936022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:48.863 [2024-11-26 23:55:36.936031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:21:48.863 [2024-11-26 23:55:36.936036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.937413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.863 [2024-11-26 23:55:36.937482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:48.863 [2024-11-26 23:55:36.937521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:21:48.863 [2024-11-26 23:55:36.937540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.937641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.863 [2024-11-26 23:55:36.937667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:48.863 
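The "WAF: inf" entry in the statistics block above follows directly from the two counters printed with it: write amplification is the ratio of total media writes to user writes, and with user writes still at zero the ratio is undefined and reported as infinity. A minimal reading of those counters (interpretation only, not SPDK code):

    total_writes, user_writes = 960, 0      # "total writes: 960" / "user writes: 0"
    waf = float('inf') if user_writes == 0 else total_writes / user_writes
    print(waf)                              # inf -> "WAF: inf"; the 960 writes are presumably internal metadata writes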
[2024-11-26 23:55:36.937682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:48.863 [2024-11-26 23:55:36.937696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.941768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.941896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.863 [2024-11-26 23:55:36.942005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.942023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.942077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.942093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.863 [2024-11-26 23:55:36.942107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.942153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.942209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.942284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.863 [2024-11-26 23:55:36.942302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.942335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.942358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.942378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.863 [2024-11-26 23:55:36.942392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.942411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.950065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.950183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.863 [2024-11-26 23:55:36.950221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.950239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.956274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.956391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.863 [2024-11-26 23:55:36.956429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.956447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.956496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.956513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.863 [2024-11-26 23:55:36.956529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.956545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.956583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.956600] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.863 [2024-11-26 23:55:36.956620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.956659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.956731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.956755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.863 [2024-11-26 23:55:36.956771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.957014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.957058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.957076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:48.863 [2024-11-26 23:55:36.957092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.957155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.957199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.957225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.863 [2024-11-26 23:55:36.957241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.957287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.957336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.863 [2024-11-26 23:55:36.957358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.863 [2024-11-26 23:55:36.957377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.863 [2024-11-26 23:55:36.957434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.863 [2024-11-26 23:55:36.957551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.662 ms, result 0 00:21:49.435 00:21:49.435 00:21:49.435 23:55:37 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:49.435 [2024-11-26 23:55:37.354205] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
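The spdk_dd invocation above reads 262144 blocks from ftl0 (--ib) into the test file (--of). Assuming the same 4 KiB block size implied elsewhere in this log (the earlier copy pass reports a 1024/1024 [MB] total), that count works out to exactly 1 GiB:

    count, block = 262144, 4096             # --count from the command line; 4 KiB block size is an assumption
    print(count * block / 2**20)            # 1024.0 MiB, i.e. the 1024 [MB] total seen in the progress lines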
00:21:49.436 [2024-11-26 23:55:37.354339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89195 ] 00:21:49.436 [2024-11-26 23:55:37.495952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.436 [2024-11-26 23:55:37.518484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.698 [2024-11-26 23:55:37.605283] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:49.698 [2024-11-26 23:55:37.605338] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:49.698 [2024-11-26 23:55:37.754360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.698 [2024-11-26 23:55:37.754394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:49.698 [2024-11-26 23:55:37.754405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:49.698 [2024-11-26 23:55:37.754411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.698 [2024-11-26 23:55:37.754449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.698 [2024-11-26 23:55:37.754456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:49.698 [2024-11-26 23:55:37.754464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:49.698 [2024-11-26 23:55:37.754474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.698 [2024-11-26 23:55:37.754491] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:49.698 [2024-11-26 23:55:37.754663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:49.698 [2024-11-26 23:55:37.754674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.698 [2024-11-26 23:55:37.754685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:49.698 [2024-11-26 23:55:37.754693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:21:49.699 [2024-11-26 23:55:37.754699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.755598] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:49.699 [2024-11-26 23:55:37.759621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.699 [2024-11-26 23:55:37.759722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:49.699 [2024-11-26 23:55:37.759771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.018 ms 00:21:49.699 [2024-11-26 23:55:37.759841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.759983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.699 [2024-11-26 23:55:37.760016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:49.699 [2024-11-26 23:55:37.760047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:49.699 [2024-11-26 23:55:37.760070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.767630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:49.699 [2024-11-26 23:55:37.767691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:49.699 [2024-11-26 23:55:37.767724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.405 ms 00:21:49.699 [2024-11-26 23:55:37.767744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.767966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.699 [2024-11-26 23:55:37.767993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:49.699 [2024-11-26 23:55:37.768014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:21:49.699 [2024-11-26 23:55:37.768032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.768191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.699 [2024-11-26 23:55:37.768218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:49.699 [2024-11-26 23:55:37.768241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:49.699 [2024-11-26 23:55:37.768272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.768328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:49.699 [2024-11-26 23:55:37.770564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.699 [2024-11-26 23:55:37.770591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:49.699 [2024-11-26 23:55:37.770601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.251 ms 00:21:49.699 [2024-11-26 23:55:37.770609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.770640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.699 [2024-11-26 23:55:37.770649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:49.699 [2024-11-26 23:55:37.770657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:49.699 [2024-11-26 23:55:37.770666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.770686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:49.699 [2024-11-26 23:55:37.770705] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:49.699 [2024-11-26 23:55:37.770743] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:49.699 [2024-11-26 23:55:37.770762] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:49.699 [2024-11-26 23:55:37.770883] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:49.699 [2024-11-26 23:55:37.770895] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:49.699 [2024-11-26 23:55:37.770912] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:49.699 [2024-11-26 23:55:37.770921] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:49.699 [2024-11-26 23:55:37.770930] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:49.699 [2024-11-26 23:55:37.770938] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:49.699 [2024-11-26 23:55:37.770950] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:49.699 [2024-11-26 23:55:37.770957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:49.699 [2024-11-26 23:55:37.770964] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:49.699 [2024-11-26 23:55:37.770972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.699 [2024-11-26 23:55:37.770980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:49.699 [2024-11-26 23:55:37.770988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:21:49.699 [2024-11-26 23:55:37.770995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.771081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.699 [2024-11-26 23:55:37.771091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:49.699 [2024-11-26 23:55:37.771098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:49.699 [2024-11-26 23:55:37.771105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.699 [2024-11-26 23:55:37.771210] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:49.699 [2024-11-26 23:55:37.771221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:49.699 [2024-11-26 23:55:37.771230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:49.699 [2024-11-26 23:55:37.771243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:49.699 [2024-11-26 23:55:37.771258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:49.699 [2024-11-26 23:55:37.771272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:49.699 [2024-11-26 23:55:37.771279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:49.699 [2024-11-26 23:55:37.771294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:49.699 [2024-11-26 23:55:37.771303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:49.699 [2024-11-26 23:55:37.771310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:49.699 [2024-11-26 23:55:37.771316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:49.699 [2024-11-26 23:55:37.771324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:49.699 [2024-11-26 23:55:37.771331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:49.699 [2024-11-26 23:55:37.771348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:49.699 [2024-11-26 23:55:37.771354] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:49.699 [2024-11-26 23:55:37.771367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.699 [2024-11-26 23:55:37.771379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:49.699 [2024-11-26 23:55:37.771386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.699 [2024-11-26 23:55:37.771398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:49.699 [2024-11-26 23:55:37.771404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.699 [2024-11-26 23:55:37.771420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:49.699 [2024-11-26 23:55:37.771426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.699 [2024-11-26 23:55:37.771440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:49.699 [2024-11-26 23:55:37.771446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:49.699 [2024-11-26 23:55:37.771458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:49.699 [2024-11-26 23:55:37.771465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:49.699 [2024-11-26 23:55:37.771472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:49.699 [2024-11-26 23:55:37.771479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:49.699 [2024-11-26 23:55:37.771486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:49.699 [2024-11-26 23:55:37.771492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:49.699 [2024-11-26 23:55:37.771505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:49.699 [2024-11-26 23:55:37.771512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.699 [2024-11-26 23:55:37.771520] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:49.699 [2024-11-26 23:55:37.771530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:49.699 [2024-11-26 23:55:37.771537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:49.699 [2024-11-26 23:55:37.771544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.700 [2024-11-26 23:55:37.771551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:49.700 [2024-11-26 23:55:37.771559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:49.700 [2024-11-26 23:55:37.771566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:49.700 
[2024-11-26 23:55:37.771573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:49.700 [2024-11-26 23:55:37.771579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:49.700 [2024-11-26 23:55:37.771585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:49.700 [2024-11-26 23:55:37.771594] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:49.700 [2024-11-26 23:55:37.771602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:49.700 [2024-11-26 23:55:37.771611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:49.700 [2024-11-26 23:55:37.771618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:49.700 [2024-11-26 23:55:37.771626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:49.700 [2024-11-26 23:55:37.771633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:49.700 [2024-11-26 23:55:37.771641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:49.700 [2024-11-26 23:55:37.771649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:49.700 [2024-11-26 23:55:37.771657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:49.700 [2024-11-26 23:55:37.771664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:49.700 [2024-11-26 23:55:37.771671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:49.700 [2024-11-26 23:55:37.771682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:49.700 [2024-11-26 23:55:37.771688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:49.700 [2024-11-26 23:55:37.771696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:49.700 [2024-11-26 23:55:37.771702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:49.700 [2024-11-26 23:55:37.771709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:49.700 [2024-11-26 23:55:37.771716] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:49.700 [2024-11-26 23:55:37.771723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:49.700 [2024-11-26 23:55:37.771731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:49.700 [2024-11-26 23:55:37.771738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:49.700 [2024-11-26 23:55:37.771745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:49.700 [2024-11-26 23:55:37.771752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:49.700 [2024-11-26 23:55:37.771761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.771769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:49.700 [2024-11-26 23:55:37.771776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:21:49.700 [2024-11-26 23:55:37.771787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.780216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.780250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:49.700 [2024-11-26 23:55:37.780259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.352 ms 00:21:49.700 [2024-11-26 23:55:37.780267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.780353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.780361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:49.700 [2024-11-26 23:55:37.780372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:49.700 [2024-11-26 23:55:37.780379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.796980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.797028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:49.700 [2024-11-26 23:55:37.797044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.553 ms 00:21:49.700 [2024-11-26 23:55:37.797062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.797111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.797124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.700 [2024-11-26 23:55:37.797136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:49.700 [2024-11-26 23:55:37.797146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.797543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.797581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:49.700 [2024-11-26 23:55:37.797595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:21:49.700 [2024-11-26 23:55:37.797606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.797786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.797834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.700 [2024-11-26 23:55:37.797846] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:21:49.700 [2024-11-26 23:55:37.797870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.803620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.803860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.700 [2024-11-26 23:55:37.803890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:21:49.700 [2024-11-26 23:55:37.803901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.806757] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:49.700 [2024-11-26 23:55:37.806805] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:49.700 [2024-11-26 23:55:37.806819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.806827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:49.700 [2024-11-26 23:55:37.806835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:21:49.700 [2024-11-26 23:55:37.806842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.821569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.821606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:49.700 [2024-11-26 23:55:37.821617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.691 ms 00:21:49.700 [2024-11-26 23:55:37.821625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.823731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.823761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:49.700 [2024-11-26 23:55:37.823769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:21:49.700 [2024-11-26 23:55:37.823777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.700 [2024-11-26 23:55:37.825811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.700 [2024-11-26 23:55:37.825837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:49.700 [2024-11-26 23:55:37.825846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.992 ms 00:21:49.700 [2024-11-26 23:55:37.825864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.962 [2024-11-26 23:55:37.826179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.962 [2024-11-26 23:55:37.826198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:49.962 [2024-11-26 23:55:37.826207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:21:49.962 [2024-11-26 23:55:37.826214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.962 [2024-11-26 23:55:37.843656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.962 [2024-11-26 23:55:37.843692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:49.963 [2024-11-26 23:55:37.843702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.422 ms 00:21:49.963 [2024-11-26 23:55:37.843710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.963 [2024-11-26 23:55:37.851020] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:49.963 [2024-11-26 23:55:37.853401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.963 [2024-11-26 23:55:37.853556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:49.963 [2024-11-26 23:55:37.853571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.655 ms 00:21:49.963 [2024-11-26 23:55:37.853578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.963 [2024-11-26 23:55:37.853633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.963 [2024-11-26 23:55:37.853643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:49.963 [2024-11-26 23:55:37.853657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:49.963 [2024-11-26 23:55:37.853665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.963 [2024-11-26 23:55:37.853744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.963 [2024-11-26 23:55:37.853757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:49.963 [2024-11-26 23:55:37.853766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:49.963 [2024-11-26 23:55:37.853773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.963 [2024-11-26 23:55:37.853809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.963 [2024-11-26 23:55:37.853819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:49.963 [2024-11-26 23:55:37.853827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:49.963 [2024-11-26 23:55:37.853834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.963 [2024-11-26 23:55:37.853879] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:49.963 [2024-11-26 23:55:37.853893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.963 [2024-11-26 23:55:37.853904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:49.963 [2024-11-26 23:55:37.853914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:49.963 [2024-11-26 23:55:37.853921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.963 [2024-11-26 23:55:37.857578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.963 [2024-11-26 23:55:37.857611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:49.963 [2024-11-26 23:55:37.857620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms 00:21:49.963 [2024-11-26 23:55:37.857629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.963 [2024-11-26 23:55:37.857694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.963 [2024-11-26 23:55:37.857703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:49.963 [2024-11-26 23:55:37.857712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:49.963 [2024-11-26 23:55:37.857721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.963 
[2024-11-26 23:55:37.858601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.850 ms, result 0 00:21:50.908  [2024-11-26T23:55:40.431Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-26T23:55:41.377Z] Copying: 35/1024 [MB] (18 MBps) [2024-11-26T23:55:42.318Z] Copying: 52/1024 [MB] (16 MBps) [2024-11-26T23:55:43.259Z] Copying: 71/1024 [MB] (19 MBps) [2024-11-26T23:55:44.198Z] Copying: 89/1024 [MB] (17 MBps) [2024-11-26T23:55:45.141Z] Copying: 103/1024 [MB] (14 MBps) [2024-11-26T23:55:46.086Z] Copying: 126/1024 [MB] (23 MBps) [2024-11-26T23:55:47.475Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-26T23:55:48.048Z] Copying: 147/1024 [MB] (10 MBps) [2024-11-26T23:55:49.436Z] Copying: 157/1024 [MB] (10 MBps) [2024-11-26T23:55:50.382Z] Copying: 168/1024 [MB] (10 MBps) [2024-11-26T23:55:51.328Z] Copying: 178/1024 [MB] (10 MBps) [2024-11-26T23:55:52.270Z] Copying: 188/1024 [MB] (10 MBps) [2024-11-26T23:55:53.215Z] Copying: 200/1024 [MB] (11 MBps) [2024-11-26T23:55:54.161Z] Copying: 230/1024 [MB] (29 MBps) [2024-11-26T23:55:55.107Z] Copying: 249/1024 [MB] (19 MBps) [2024-11-26T23:55:56.053Z] Copying: 260/1024 [MB] (10 MBps) [2024-11-26T23:55:57.443Z] Copying: 270/1024 [MB] (10 MBps) [2024-11-26T23:55:58.387Z] Copying: 281/1024 [MB] (10 MBps) [2024-11-26T23:55:59.335Z] Copying: 295/1024 [MB] (14 MBps) [2024-11-26T23:56:00.352Z] Copying: 306/1024 [MB] (10 MBps) [2024-11-26T23:56:01.299Z] Copying: 317/1024 [MB] (10 MBps) [2024-11-26T23:56:02.243Z] Copying: 335/1024 [MB] (18 MBps) [2024-11-26T23:56:03.185Z] Copying: 348/1024 [MB] (13 MBps) [2024-11-26T23:56:04.130Z] Copying: 371/1024 [MB] (23 MBps) [2024-11-26T23:56:05.074Z] Copying: 393/1024 [MB] (21 MBps) [2024-11-26T23:56:06.461Z] Copying: 415/1024 [MB] (21 MBps) [2024-11-26T23:56:07.406Z] Copying: 437/1024 [MB] (22 MBps) [2024-11-26T23:56:08.349Z] Copying: 460/1024 [MB] (23 MBps) [2024-11-26T23:56:09.294Z] Copying: 482/1024 [MB] (22 MBps) [2024-11-26T23:56:10.237Z] Copying: 505/1024 [MB] (22 MBps) [2024-11-26T23:56:11.183Z] Copying: 523/1024 [MB] (18 MBps) [2024-11-26T23:56:12.129Z] Copying: 540/1024 [MB] (16 MBps) [2024-11-26T23:56:13.074Z] Copying: 560/1024 [MB] (20 MBps) [2024-11-26T23:56:14.463Z] Copying: 578/1024 [MB] (17 MBps) [2024-11-26T23:56:15.403Z] Copying: 591/1024 [MB] (12 MBps) [2024-11-26T23:56:16.345Z] Copying: 606/1024 [MB] (14 MBps) [2024-11-26T23:56:17.290Z] Copying: 625/1024 [MB] (19 MBps) [2024-11-26T23:56:18.235Z] Copying: 642/1024 [MB] (17 MBps) [2024-11-26T23:56:19.181Z] Copying: 653/1024 [MB] (11 MBps) [2024-11-26T23:56:20.123Z] Copying: 675/1024 [MB] (22 MBps) [2024-11-26T23:56:21.066Z] Copying: 686/1024 [MB] (10 MBps) [2024-11-26T23:56:22.454Z] Copying: 699/1024 [MB] (12 MBps) [2024-11-26T23:56:23.396Z] Copying: 714/1024 [MB] (15 MBps) [2024-11-26T23:56:24.342Z] Copying: 734/1024 [MB] (20 MBps) [2024-11-26T23:56:25.288Z] Copying: 749/1024 [MB] (14 MBps) [2024-11-26T23:56:26.234Z] Copying: 771/1024 [MB] (22 MBps) [2024-11-26T23:56:27.179Z] Copying: 793/1024 [MB] (21 MBps) [2024-11-26T23:56:28.125Z] Copying: 806/1024 [MB] (13 MBps) [2024-11-26T23:56:29.071Z] Copying: 827/1024 [MB] (21 MBps) [2024-11-26T23:56:30.466Z] Copying: 850/1024 [MB] (22 MBps) [2024-11-26T23:56:31.038Z] Copying: 870/1024 [MB] (20 MBps) [2024-11-26T23:56:32.422Z] Copying: 893/1024 [MB] (22 MBps) [2024-11-26T23:56:33.369Z] Copying: 917/1024 [MB] (24 MBps) [2024-11-26T23:56:34.386Z] Copying: 938/1024 [MB] (20 MBps) [2024-11-26T23:56:35.330Z] Copying: 959/1024 [MB] (21 MBps) 
[2024-11-26T23:56:36.274Z] Copying: 979/1024 [MB] (19 MBps) [2024-11-26T23:56:37.214Z] Copying: 998/1024 [MB] (19 MBps) [2024-11-26T23:56:38.155Z] Copying: 1009/1024 [MB] (11 MBps) [2024-11-26T23:56:38.416Z] Copying: 1020/1024 [MB] (10 MBps) [2024-11-26T23:56:38.991Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-26 23:56:38.760047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.760157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:50.860 [2024-11-26 23:56:38.760184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:50.860 [2024-11-26 23:56:38.760195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.760226] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:50.860 [2024-11-26 23:56:38.761227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.761260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:50.860 [2024-11-26 23:56:38.761274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:22:50.860 [2024-11-26 23:56:38.761302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.761577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.761588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:50.860 [2024-11-26 23:56:38.761599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:22:50.860 [2024-11-26 23:56:38.761615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.765109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.765128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:50.860 [2024-11-26 23:56:38.765139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:22:50.860 [2024-11-26 23:56:38.765148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.772270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.772306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:50.860 [2024-11-26 23:56:38.772317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.091 ms 00:22:50.860 [2024-11-26 23:56:38.772334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.775014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.775058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:50.860 [2024-11-26 23:56:38.775070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:22:50.860 [2024-11-26 23:56:38.775078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.780250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.780290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:50.860 [2024-11-26 23:56:38.780303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.124 ms 00:22:50.860 [2024-11-26 23:56:38.780313] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.780452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.780464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:50.860 [2024-11-26 23:56:38.780474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:50.860 [2024-11-26 23:56:38.780491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.783935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.783972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:50.860 [2024-11-26 23:56:38.783983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:22:50.860 [2024-11-26 23:56:38.783992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.786875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.787059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:50.860 [2024-11-26 23:56:38.787078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.838 ms 00:22:50.860 [2024-11-26 23:56:38.787086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.790060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.790273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:50.860 [2024-11-26 23:56:38.790296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.853 ms 00:22:50.860 [2024-11-26 23:56:38.790305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.792659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.860 [2024-11-26 23:56:38.792700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:50.860 [2024-11-26 23:56:38.792711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:22:50.860 [2024-11-26 23:56:38.792720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.860 [2024-11-26 23:56:38.792766] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:50.860 [2024-11-26 23:56:38.792786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: 
free 00:22:50.860 [2024-11-26 23:56:38.792874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:50.860 [2024-11-26 23:56:38.792979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.792987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.792995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 
261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793527] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:50.861 [2024-11-26 23:56:38.793570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:50.862 [2024-11-26 23:56:38.793680] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:50.862 [2024-11-26 23:56:38.793688] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b112cea9-c2c5-4cd0-a5f8-134f526173e0 00:22:50.862 [2024-11-26 23:56:38.793697] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:50.862 [2024-11-26 23:56:38.793705] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:50.862 [2024-11-26 23:56:38.793718] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:50.862 [2024-11-26 23:56:38.793726] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:50.862 [2024-11-26 23:56:38.793741] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:50.862 [2024-11-26 23:56:38.793755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:50.862 [2024-11-26 23:56:38.793766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] high: 0 00:22:50.862 [2024-11-26 23:56:38.793773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:50.862 [2024-11-26 23:56:38.793781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:50.862 [2024-11-26 23:56:38.794206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.862 [2024-11-26 23:56:38.794255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:50.862 [2024-11-26 23:56:38.794278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:22:50.862 [2024-11-26 23:56:38.794299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.797522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.862 [2024-11-26 23:56:38.797663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:50.862 [2024-11-26 23:56:38.797719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:22:50.862 [2024-11-26 23:56:38.797743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.797975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.862 [2024-11-26 23:56:38.798105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:50.862 [2024-11-26 23:56:38.798265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:22:50.862 [2024-11-26 23:56:38.798289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.808427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.808580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.862 [2024-11-26 23:56:38.808636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.808660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.808751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.808774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.862 [2024-11-26 23:56:38.808811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.808838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.808927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.808987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.862 [2024-11-26 23:56:38.808999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.809008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.809032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.809046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.862 [2024-11-26 23:56:38.809055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.809063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.829447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.829644] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.862 [2024-11-26 23:56:38.829701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.829736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.845619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.845805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.862 [2024-11-26 23:56:38.845865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.845890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.846001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.846026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.862 [2024-11-26 23:56:38.846048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.846068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.846128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.846159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.862 [2024-11-26 23:56:38.846182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.846241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.846443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.846567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.862 [2024-11-26 23:56:38.846623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.846639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.846690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.846701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:50.862 [2024-11-26 23:56:38.846715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.846724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.846785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.846822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.862 [2024-11-26 23:56:38.846832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.846842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.846909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.862 [2024-11-26 23:56:38.846923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.862 [2024-11-26 23:56:38.846933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.862 [2024-11-26 23:56:38.846952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.862 [2024-11-26 23:56:38.847125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: 
[FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.035 ms, result 0 00:22:51.124 00:22:51.124 00:22:51.124 23:56:39 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:53.682 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:53.682 23:56:41 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:53.682 [2024-11-26 23:56:41.535516] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:22:53.682 [2024-11-26 23:56:41.535865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89857 ] 00:22:53.682 [2024-11-26 23:56:41.681505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.682 [2024-11-26 23:56:41.721594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.945 [2024-11-26 23:56:41.872076] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.945 [2024-11-26 23:56:41.872181] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.945 [2024-11-26 23:56:42.035493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.035556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:53.945 [2024-11-26 23:56:42.035578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:53.945 [2024-11-26 23:56:42.035587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.035649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.035661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.945 [2024-11-26 23:56:42.035670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:53.945 [2024-11-26 23:56:42.035692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.035721] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:53.945 [2024-11-26 23:56:42.036030] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:53.945 [2024-11-26 23:56:42.036052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.036061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.945 [2024-11-26 23:56:42.036074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:22:53.945 [2024-11-26 23:56:42.036082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.038356] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:53.945 [2024-11-26 23:56:42.043366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.043419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:53.945 [2024-11-26 23:56:42.043431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
5.011 ms 00:22:53.945 [2024-11-26 23:56:42.043448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.043527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.043546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:53.945 [2024-11-26 23:56:42.043555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:53.945 [2024-11-26 23:56:42.043563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.055064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.055110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.945 [2024-11-26 23:56:42.055126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.449 ms 00:22:53.945 [2024-11-26 23:56:42.055134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.055242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.055253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.945 [2024-11-26 23:56:42.055262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:53.945 [2024-11-26 23:56:42.055273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.055338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.055348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:53.945 [2024-11-26 23:56:42.055357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:53.945 [2024-11-26 23:56:42.055368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.055394] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.945 [2024-11-26 23:56:42.058105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.058145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.945 [2024-11-26 23:56:42.058156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:22:53.945 [2024-11-26 23:56:42.058170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.058211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.058220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:53.945 [2024-11-26 23:56:42.058230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:53.945 [2024-11-26 23:56:42.058243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.058274] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:53.945 [2024-11-26 23:56:42.058299] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:53.945 [2024-11-26 23:56:42.058351] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:53.945 [2024-11-26 23:56:42.058374] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 
bytes 00:22:53.945 [2024-11-26 23:56:42.058487] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:53.945 [2024-11-26 23:56:42.058500] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:53.945 [2024-11-26 23:56:42.058517] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:53.945 [2024-11-26 23:56:42.058528] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:53.945 [2024-11-26 23:56:42.058538] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:53.945 [2024-11-26 23:56:42.058546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:53.945 [2024-11-26 23:56:42.058553] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:53.945 [2024-11-26 23:56:42.058561] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:53.945 [2024-11-26 23:56:42.058568] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:53.945 [2024-11-26 23:56:42.058577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.058584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:53.945 [2024-11-26 23:56:42.058592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:22:53.945 [2024-11-26 23:56:42.058602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.058688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.945 [2024-11-26 23:56:42.058697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:53.945 [2024-11-26 23:56:42.058709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:53.945 [2024-11-26 23:56:42.058717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.945 [2024-11-26 23:56:42.058841] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:53.945 [2024-11-26 23:56:42.058853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:53.945 [2024-11-26 23:56:42.058862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.945 [2024-11-26 23:56:42.058869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.945 [2024-11-26 23:56:42.058878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:53.945 [2024-11-26 23:56:42.058885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:53.945 [2024-11-26 23:56:42.058892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:53.945 [2024-11-26 23:56:42.058900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:53.945 [2024-11-26 23:56:42.058908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:53.945 [2024-11-26 23:56:42.058915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.945 [2024-11-26 23:56:42.058929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:53.945 [2024-11-26 23:56:42.058937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:53.945 [2024-11-26 23:56:42.058944] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.945 [2024-11-26 23:56:42.058950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:53.945 [2024-11-26 23:56:42.058957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:53.945 [2024-11-26 23:56:42.058964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.946 [2024-11-26 23:56:42.058972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:53.946 [2024-11-26 23:56:42.058979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:53.946 [2024-11-26 23:56:42.058986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.946 [2024-11-26 23:56:42.058993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:53.946 [2024-11-26 23:56:42.058999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:53.946 [2024-11-26 23:56:42.059008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.946 [2024-11-26 23:56:42.059016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:53.946 [2024-11-26 23:56:42.059023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:53.946 [2024-11-26 23:56:42.059030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.946 [2024-11-26 23:56:42.059037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:53.946 [2024-11-26 23:56:42.059048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:53.946 [2024-11-26 23:56:42.059055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.946 [2024-11-26 23:56:42.059062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:53.946 [2024-11-26 23:56:42.059069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:53.946 [2024-11-26 23:56:42.059076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.946 [2024-11-26 23:56:42.059083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:53.946 [2024-11-26 23:56:42.059089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:53.946 [2024-11-26 23:56:42.059096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.946 [2024-11-26 23:56:42.059102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:53.946 [2024-11-26 23:56:42.059109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:53.946 [2024-11-26 23:56:42.059116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.946 [2024-11-26 23:56:42.059122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:53.946 [2024-11-26 23:56:42.059129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:53.946 [2024-11-26 23:56:42.059136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.946 [2024-11-26 23:56:42.059143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:53.946 [2024-11-26 23:56:42.059150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:53.946 [2024-11-26 23:56:42.059160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.946 [2024-11-26 23:56:42.059167] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:53.946 [2024-11-26 
23:56:42.059178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:53.946 [2024-11-26 23:56:42.059185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.946 [2024-11-26 23:56:42.059193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.946 [2024-11-26 23:56:42.059202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:53.946 [2024-11-26 23:56:42.059210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:53.946 [2024-11-26 23:56:42.059216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:53.946 [2024-11-26 23:56:42.059223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:53.946 [2024-11-26 23:56:42.059230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:53.946 [2024-11-26 23:56:42.059236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:53.946 [2024-11-26 23:56:42.059247] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:53.946 [2024-11-26 23:56:42.059257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.946 [2024-11-26 23:56:42.059269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:53.946 [2024-11-26 23:56:42.059278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:53.946 [2024-11-26 23:56:42.059286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:53.946 [2024-11-26 23:56:42.059295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:53.946 [2024-11-26 23:56:42.059303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:53.946 [2024-11-26 23:56:42.059310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:53.946 [2024-11-26 23:56:42.059318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:53.946 [2024-11-26 23:56:42.059325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:53.946 [2024-11-26 23:56:42.059332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:53.946 [2024-11-26 23:56:42.059346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:53.946 [2024-11-26 23:56:42.059353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:53.946 [2024-11-26 23:56:42.059360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:53.946 [2024-11-26 23:56:42.059367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:22:53.946 [2024-11-26 23:56:42.059374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:53.946 [2024-11-26 23:56:42.059380] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:53.946 [2024-11-26 23:56:42.059389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.946 [2024-11-26 23:56:42.059398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:53.946 [2024-11-26 23:56:42.059405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:53.946 [2024-11-26 23:56:42.059412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:53.946 [2024-11-26 23:56:42.059421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:53.946 [2024-11-26 23:56:42.059429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.946 [2024-11-26 23:56:42.059437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:53.946 [2024-11-26 23:56:42.059444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:22:53.946 [2024-11-26 23:56:42.059454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.208 [2024-11-26 23:56:42.079423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.208 [2024-11-26 23:56:42.079476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.208 [2024-11-26 23:56:42.079489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.902 ms 00:22:54.208 [2024-11-26 23:56:42.079504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.208 [2024-11-26 23:56:42.079602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.208 [2024-11-26 23:56:42.079611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:54.208 [2024-11-26 23:56:42.079621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:54.208 [2024-11-26 23:56:42.079630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.208 [2024-11-26 23:56:42.105009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.208 [2024-11-26 23:56:42.105074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.208 [2024-11-26 23:56:42.105091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.311 ms 00:22:54.208 [2024-11-26 23:56:42.105103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.208 [2024-11-26 23:56:42.105164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.208 [2024-11-26 23:56:42.105178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.208 [2024-11-26 23:56:42.105199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:54.208 [2024-11-26 23:56:42.105214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.208 [2024-11-26 23:56:42.106029] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.208 [2024-11-26 23:56:42.106084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.208 [2024-11-26 23:56:42.106100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:22:54.208 [2024-11-26 23:56:42.106111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.208 [2024-11-26 23:56:42.106313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.208 [2024-11-26 23:56:42.106330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.208 [2024-11-26 23:56:42.106341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:22:54.208 [2024-11-26 23:56:42.106351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.208 [2024-11-26 23:56:42.117660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.208 [2024-11-26 23:56:42.117708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.208 [2024-11-26 23:56:42.117719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.280 ms 00:22:54.208 [2024-11-26 23:56:42.117736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.208 [2024-11-26 23:56:42.122466] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:54.209 [2024-11-26 23:56:42.122516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:54.209 [2024-11-26 23:56:42.122533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.122543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:54.209 [2024-11-26 23:56:42.122552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.657 ms 00:22:54.209 [2024-11-26 23:56:42.122559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.138586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.138631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:54.209 [2024-11-26 23:56:42.138645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.965 ms 00:22:54.209 [2024-11-26 23:56:42.138654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.141748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.141812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:54.209 [2024-11-26 23:56:42.141823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:22:54.209 [2024-11-26 23:56:42.141831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.144383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.144558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:54.209 [2024-11-26 23:56:42.144577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 00:22:54.209 [2024-11-26 23:56:42.144584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.144980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 
23:56:42.144998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:54.209 [2024-11-26 23:56:42.145009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:22:54.209 [2024-11-26 23:56:42.145022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.173302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.173383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:54.209 [2024-11-26 23:56:42.173398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.255 ms 00:22:54.209 [2024-11-26 23:56:42.173408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.182471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:54.209 [2024-11-26 23:56:42.186205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.186389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:54.209 [2024-11-26 23:56:42.186410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.731 ms 00:22:54.209 [2024-11-26 23:56:42.186426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.186528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.186541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:54.209 [2024-11-26 23:56:42.186562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:54.209 [2024-11-26 23:56:42.186572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.186660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.186674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:54.209 [2024-11-26 23:56:42.186684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:54.209 [2024-11-26 23:56:42.186692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.186721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.186736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:54.209 [2024-11-26 23:56:42.186750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:54.209 [2024-11-26 23:56:42.186758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.186847] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:54.209 [2024-11-26 23:56:42.186860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.186869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:54.209 [2024-11-26 23:56:42.186885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:54.209 [2024-11-26 23:56:42.186894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.193180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.193349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:54.209 [2024-11-26 
23:56:42.193369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.260 ms 00:22:54.209 [2024-11-26 23:56:42.193379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.193466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.209 [2024-11-26 23:56:42.193477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:54.209 [2024-11-26 23:56:42.193490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:54.209 [2024-11-26 23:56:42.193502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.209 [2024-11-26 23:56:42.194959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.860 ms, result 0 00:22:55.155  [2024-11-26T23:56:44.232Z] Copying: 10092/1048576 [kB] (10092 kBps) [2024-11-26T23:56:45.620Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-26T23:56:46.564Z] Copying: 30/1024 [MB] (10 MBps) [2024-11-26T23:56:47.509Z] Copying: 41/1024 [MB] (10 MBps) [2024-11-26T23:56:48.453Z] Copying: 51/1024 [MB] (10 MBps) [2024-11-26T23:56:49.405Z] Copying: 62552/1048576 [kB] (9952 kBps) [2024-11-26T23:56:50.341Z] Copying: 72768/1048576 [kB] (10216 kBps) [2024-11-26T23:56:51.274Z] Copying: 108/1024 [MB] (37 MBps) [2024-11-26T23:56:52.212Z] Copying: 163/1024 [MB] (54 MBps) [2024-11-26T23:56:53.600Z] Copying: 193/1024 [MB] (29 MBps) [2024-11-26T23:56:54.546Z] Copying: 206/1024 [MB] (13 MBps) [2024-11-26T23:56:55.492Z] Copying: 218/1024 [MB] (12 MBps) [2024-11-26T23:56:56.436Z] Copying: 232/1024 [MB] (14 MBps) [2024-11-26T23:56:57.382Z] Copying: 248/1024 [MB] (16 MBps) [2024-11-26T23:56:58.315Z] Copying: 263/1024 [MB] (15 MBps) [2024-11-26T23:56:59.257Z] Copying: 292/1024 [MB] (28 MBps) [2024-11-26T23:57:00.645Z] Copying: 306/1024 [MB] (14 MBps) [2024-11-26T23:57:01.218Z] Copying: 319/1024 [MB] (12 MBps) [2024-11-26T23:57:02.605Z] Copying: 339/1024 [MB] (20 MBps) [2024-11-26T23:57:03.548Z] Copying: 350/1024 [MB] (11 MBps) [2024-11-26T23:57:04.491Z] Copying: 366/1024 [MB] (16 MBps) [2024-11-26T23:57:05.526Z] Copying: 400/1024 [MB] (33 MBps) [2024-11-26T23:57:06.465Z] Copying: 439/1024 [MB] (39 MBps) [2024-11-26T23:57:07.405Z] Copying: 459/1024 [MB] (20 MBps) [2024-11-26T23:57:08.346Z] Copying: 485/1024 [MB] (25 MBps) [2024-11-26T23:57:09.290Z] Copying: 505/1024 [MB] (20 MBps) [2024-11-26T23:57:10.230Z] Copying: 526/1024 [MB] (20 MBps) [2024-11-26T23:57:11.612Z] Copying: 553/1024 [MB] (26 MBps) [2024-11-26T23:57:12.553Z] Copying: 574/1024 [MB] (21 MBps) [2024-11-26T23:57:13.495Z] Copying: 591/1024 [MB] (17 MBps) [2024-11-26T23:57:14.438Z] Copying: 607/1024 [MB] (16 MBps) [2024-11-26T23:57:15.373Z] Copying: 618/1024 [MB] (11 MBps) [2024-11-26T23:57:16.307Z] Copying: 656/1024 [MB] (37 MBps) [2024-11-26T23:57:17.242Z] Copying: 689/1024 [MB] (32 MBps) [2024-11-26T23:57:18.618Z] Copying: 721/1024 [MB] (31 MBps) [2024-11-26T23:57:19.553Z] Copying: 773/1024 [MB] (52 MBps) [2024-11-26T23:57:20.487Z] Copying: 805/1024 [MB] (31 MBps) [2024-11-26T23:57:21.424Z] Copying: 836/1024 [MB] (31 MBps) [2024-11-26T23:57:22.367Z] Copying: 865/1024 [MB] (29 MBps) [2024-11-26T23:57:23.301Z] Copying: 883/1024 [MB] (18 MBps) [2024-11-26T23:57:24.232Z] Copying: 914/1024 [MB] (30 MBps) [2024-11-26T23:57:25.605Z] Copying: 948/1024 [MB] (34 MBps) [2024-11-26T23:57:26.546Z] Copying: 994/1024 [MB] (45 MBps) [2024-11-26T23:57:27.120Z] Copying: 1023/1024 [MB] (28 MBps) [2024-11-26T23:57:27.120Z] Copying: 1024/1024 [MB] 
(average 22 MBps)[2024-11-26 23:57:26.868518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.989 [2024-11-26 23:57:26.868626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:38.989 [2024-11-26 23:57:26.868647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.989 [2024-11-26 23:57:26.868659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.989 [2024-11-26 23:57:26.869838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:38.989 [2024-11-26 23:57:26.872213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.989 [2024-11-26 23:57:26.872276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:38.989 [2024-11-26 23:57:26.872301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.334 ms 00:23:38.989 [2024-11-26 23:57:26.872313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.989 [2024-11-26 23:57:26.887087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.989 [2024-11-26 23:57:26.887290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:38.989 [2024-11-26 23:57:26.887315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.319 ms 00:23:38.989 [2024-11-26 23:57:26.887326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.989 [2024-11-26 23:57:26.912019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.989 [2024-11-26 23:57:26.912067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:38.989 [2024-11-26 23:57:26.912080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.666 ms 00:23:38.989 [2024-11-26 23:57:26.912100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.989 [2024-11-26 23:57:26.918315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.989 [2024-11-26 23:57:26.918493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:38.989 [2024-11-26 23:57:26.918513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:23:38.989 [2024-11-26 23:57:26.918524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.989 [2024-11-26 23:57:26.921566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.989 [2024-11-26 23:57:26.921723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:38.989 [2024-11-26 23:57:26.921741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:23:38.989 [2024-11-26 23:57:26.921751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.989 [2024-11-26 23:57:26.927187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.989 [2024-11-26 23:57:26.927260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:38.989 [2024-11-26 23:57:26.927282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.306 ms 00:23:38.989 [2024-11-26 23:57:26.927296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.252 [2024-11-26 23:57:27.173300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.252 [2024-11-26 23:57:27.173349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:39.252 
[2024-11-26 23:57:27.173363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 245.954 ms 00:23:39.252 [2024-11-26 23:57:27.173372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.252 [2024-11-26 23:57:27.176162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.252 [2024-11-26 23:57:27.176210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:39.252 [2024-11-26 23:57:27.176220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.772 ms 00:23:39.252 [2024-11-26 23:57:27.176229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.252 [2024-11-26 23:57:27.178287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.252 [2024-11-26 23:57:27.178334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:39.252 [2024-11-26 23:57:27.178345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:23:39.252 [2024-11-26 23:57:27.178354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.252 [2024-11-26 23:57:27.180339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.252 [2024-11-26 23:57:27.180386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:39.253 [2024-11-26 23:57:27.180396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:23:39.253 [2024-11-26 23:57:27.180404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.253 [2024-11-26 23:57:27.182558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.253 [2024-11-26 23:57:27.182727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:39.253 [2024-11-26 23:57:27.182744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:23:39.253 [2024-11-26 23:57:27.182752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.253 [2024-11-26 23:57:27.182847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:39.253 [2024-11-26 23:57:27.182865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101120 / 261120 wr_cnt: 1 state: open 00:23:39.253 [2024-11-26 23:57:27.182877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 
state: free 00:23:39.253 [2024-11-26 23:57:27.182952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.182997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 
0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:39.253 [2024-11-26 23:57:27.183502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183564] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:39.254 [2024-11-26 23:57:27.183698] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:39.254 [2024-11-26 23:57:27.183707] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b112cea9-c2c5-4cd0-a5f8-134f526173e0 00:23:39.254 [2024-11-26 23:57:27.183717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101120 00:23:39.254 [2024-11-26 23:57:27.183730] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102080 00:23:39.254 [2024-11-26 23:57:27.183744] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101120 00:23:39.254 [2024-11-26 23:57:27.183753] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:23:39.254 [2024-11-26 23:57:27.183762] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:39.254 [2024-11-26 23:57:27.183775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:39.254 [2024-11-26 23:57:27.183783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:39.254 [2024-11-26 23:57:27.183811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:39.254 [2024-11-26 23:57:27.183820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:39.254 [2024-11-26 23:57:27.183828] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.254 [2024-11-26 23:57:27.183838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:39.254 [2024-11-26 23:57:27.183848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:23:39.254 [2024-11-26 23:57:27.183856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.186945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.254 [2024-11-26 23:57:27.186978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:39.254 [2024-11-26 23:57:27.186995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.071 ms 00:23:39.254 [2024-11-26 23:57:27.187005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.187155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.254 [2024-11-26 23:57:27.187165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:39.254 [2024-11-26 23:57:27.187175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:39.254 [2024-11-26 23:57:27.187187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.197170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.197221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:39.254 [2024-11-26 23:57:27.197232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.197242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.197307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.197317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:39.254 [2024-11-26 23:57:27.197326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.197339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.197407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.197418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:39.254 [2024-11-26 23:57:27.197428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.197436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.197453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.197467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:39.254 [2024-11-26 23:57:27.197475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.197484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.216908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.216973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:39.254 [2024-11-26 23:57:27.216986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.216996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.232332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.232561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:39.254 [2024-11-26 23:57:27.232581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.232593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.232689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.232701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:39.254 [2024-11-26 23:57:27.232710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.232719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.232762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.232772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:39.254 [2024-11-26 23:57:27.232782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.232813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.232914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.232929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:39.254 [2024-11-26 23:57:27.232939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.232948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.232982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.232993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:39.254 [2024-11-26 23:57:27.233002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.233011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.233061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.233077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:39.254 [2024-11-26 23:57:27.233087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.233097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.233154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.254 [2024-11-26 23:57:27.233167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:39.254 [2024-11-26 23:57:27.233177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.254 [2024-11-26 23:57:27.233196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.254 [2024-11-26 23:57:27.233368] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.545 ms, result 0 00:23:39.515 00:23:39.515 00:23:39.515 23:57:27 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:39.515 [2024-11-26 23:57:27.593208] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:23:39.515 [2024-11-26 23:57:27.593387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90333 ] 00:23:39.777 [2024-11-26 23:57:27.743665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.777 [2024-11-26 23:57:27.783877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.040 [2024-11-26 23:57:27.933618] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.040 [2024-11-26 23:57:27.934032] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.041 [2024-11-26 23:57:28.097385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.097594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.041 [2024-11-26 23:57:28.097620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:40.041 [2024-11-26 23:57:28.097630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.097715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.097727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.041 [2024-11-26 23:57:28.097738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:40.041 [2024-11-26 23:57:28.097753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.097813] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.041 [2024-11-26 23:57:28.098218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.041 [2024-11-26 23:57:28.098249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.098264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.041 [2024-11-26 23:57:28.098282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:23:40.041 [2024-11-26 23:57:28.098292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.100500] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.041 [2024-11-26 23:57:28.105309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.105474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.041 [2024-11-26 23:57:28.105983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:23:40.041 [2024-11-26 23:57:28.106018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.106109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.106121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:40.041 [2024-11-26 23:57:28.106132] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:40.041 [2024-11-26 23:57:28.106147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.117487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.117531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.041 [2024-11-26 23:57:28.117555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.284 ms 00:23:40.041 [2024-11-26 23:57:28.117563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.117676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.117687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.041 [2024-11-26 23:57:28.117696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:40.041 [2024-11-26 23:57:28.117704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.117775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.117826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.041 [2024-11-26 23:57:28.117844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:40.041 [2024-11-26 23:57:28.117856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.117881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.041 [2024-11-26 23:57:28.120515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.120687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.041 [2024-11-26 23:57:28.120705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:23:40.041 [2024-11-26 23:57:28.120720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.120772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.120788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.041 [2024-11-26 23:57:28.120817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:40.041 [2024-11-26 23:57:28.120829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.120850] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.041 [2024-11-26 23:57:28.120875] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.041 [2024-11-26 23:57:28.120921] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.041 [2024-11-26 23:57:28.120938] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:40.041 [2024-11-26 23:57:28.121050] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.041 [2024-11-26 23:57:28.121062] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.041 [2024-11-26 23:57:28.121077] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:40.041 [2024-11-26 23:57:28.121092] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121101] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121115] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:40.041 [2024-11-26 23:57:28.121123] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.041 [2024-11-26 23:57:28.121131] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.041 [2024-11-26 23:57:28.121139] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.041 [2024-11-26 23:57:28.121150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.121159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.041 [2024-11-26 23:57:28.121167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:23:40.041 [2024-11-26 23:57:28.121174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.121260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.041 [2024-11-26 23:57:28.121269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.041 [2024-11-26 23:57:28.121278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:40.041 [2024-11-26 23:57:28.121290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.041 [2024-11-26 23:57:28.121401] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.041 [2024-11-26 23:57:28.121413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.041 [2024-11-26 23:57:28.121423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.041 [2024-11-26 23:57:28.121454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.041 [2024-11-26 23:57:28.121484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.041 [2024-11-26 23:57:28.121501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.041 [2024-11-26 23:57:28.121508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:40.041 [2024-11-26 23:57:28.121516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.041 [2024-11-26 23:57:28.121524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.041 [2024-11-26 23:57:28.121532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:40.041 [2024-11-26 23:57:28.121539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.041 
[2024-11-26 23:57:28.121550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:40.041 [2024-11-26 23:57:28.121558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.041 [2024-11-26 23:57:28.121583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.041 [2024-11-26 23:57:28.121609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.041 [2024-11-26 23:57:28.121630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.041 [2024-11-26 23:57:28.121651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.041 [2024-11-26 23:57:28.121664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.041 [2024-11-26 23:57:28.121670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:40.041 [2024-11-26 23:57:28.121676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.041 [2024-11-26 23:57:28.121682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.041 [2024-11-26 23:57:28.121689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:40.041 [2024-11-26 23:57:28.121696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.041 [2024-11-26 23:57:28.121702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.041 [2024-11-26 23:57:28.121708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:40.042 [2024-11-26 23:57:28.121717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.042 [2024-11-26 23:57:28.121724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.042 [2024-11-26 23:57:28.121731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:40.042 [2024-11-26 23:57:28.121737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.042 [2024-11-26 23:57:28.121745] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.042 [2024-11-26 23:57:28.121754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.042 [2024-11-26 23:57:28.121762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.042 [2024-11-26 23:57:28.121770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.042 [2024-11-26 23:57:28.121777] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:40.042 [2024-11-26 23:57:28.121786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.042 [2024-11-26 23:57:28.121808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.042 [2024-11-26 23:57:28.121816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.042 [2024-11-26 23:57:28.121823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.042 [2024-11-26 23:57:28.121830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.042 [2024-11-26 23:57:28.121839] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.042 [2024-11-26 23:57:28.121848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.042 [2024-11-26 23:57:28.121860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:40.042 [2024-11-26 23:57:28.121868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:40.042 [2024-11-26 23:57:28.121877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:40.042 [2024-11-26 23:57:28.121884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:40.042 [2024-11-26 23:57:28.121891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:40.042 [2024-11-26 23:57:28.121900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:40.042 [2024-11-26 23:57:28.121908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:40.042 [2024-11-26 23:57:28.121915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:40.042 [2024-11-26 23:57:28.121948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:40.042 [2024-11-26 23:57:28.121971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:40.042 [2024-11-26 23:57:28.121978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:40.042 [2024-11-26 23:57:28.121986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:40.042 [2024-11-26 23:57:28.121994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:40.042 [2024-11-26 23:57:28.122002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:40.042 [2024-11-26 23:57:28.122008] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:40.042 [2024-11-26 
23:57:28.122017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.042 [2024-11-26 23:57:28.122028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.042 [2024-11-26 23:57:28.122036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.042 [2024-11-26 23:57:28.122044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.042 [2024-11-26 23:57:28.122051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.042 [2024-11-26 23:57:28.122059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.042 [2024-11-26 23:57:28.122067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.042 [2024-11-26 23:57:28.122076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:23:40.042 [2024-11-26 23:57:28.122090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.042 [2024-11-26 23:57:28.141968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.042 [2024-11-26 23:57:28.142019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.042 [2024-11-26 23:57:28.142030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.813 ms 00:23:40.042 [2024-11-26 23:57:28.142038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.042 [2024-11-26 23:57:28.142140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.042 [2024-11-26 23:57:28.142148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.042 [2024-11-26 23:57:28.142157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:40.042 [2024-11-26 23:57:28.142167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.168495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.168569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.304 [2024-11-26 23:57:28.168592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.267 ms 00:23:40.304 [2024-11-26 23:57:28.168603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.168665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.168678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.304 [2024-11-26 23:57:28.168689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:40.304 [2024-11-26 23:57:28.168699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.169480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.169527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.304 [2024-11-26 23:57:28.169543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:23:40.304 [2024-11-26 23:57:28.169554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.169760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.169773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.304 [2024-11-26 23:57:28.169784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:23:40.304 [2024-11-26 23:57:28.169812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.180692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.180747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.304 [2024-11-26 23:57:28.180760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.853 ms 00:23:40.304 [2024-11-26 23:57:28.180773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.185536] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:40.304 [2024-11-26 23:57:28.185587] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.304 [2024-11-26 23:57:28.185605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.185614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.304 [2024-11-26 23:57:28.185624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.696 ms 00:23:40.304 [2024-11-26 23:57:28.185633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.201564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.201617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.304 [2024-11-26 23:57:28.201630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.877 ms 00:23:40.304 [2024-11-26 23:57:28.201639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.204508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.204561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.304 [2024-11-26 23:57:28.204571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:23:40.304 [2024-11-26 23:57:28.204579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.304 [2024-11-26 23:57:28.206981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.304 [2024-11-26 23:57:28.207159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.304 [2024-11-26 23:57:28.207177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:23:40.305 [2024-11-26 23:57:28.207186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.207529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.207543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.305 [2024-11-26 23:57:28.207552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:23:40.305 [2024-11-26 23:57:28.207564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 
23:57:28.235921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.235982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.305 [2024-11-26 23:57:28.235997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.332 ms 00:23:40.305 [2024-11-26 23:57:28.236014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.244353] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:40.305 [2024-11-26 23:57:28.248022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.248065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.305 [2024-11-26 23:57:28.248077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.955 ms 00:23:40.305 [2024-11-26 23:57:28.248086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.248174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.248187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.305 [2024-11-26 23:57:28.248198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:40.305 [2024-11-26 23:57:28.248215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.250364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.250535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.305 [2024-11-26 23:57:28.250554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:23:40.305 [2024-11-26 23:57:28.250565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.250600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.250610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.305 [2024-11-26 23:57:28.250619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:40.305 [2024-11-26 23:57:28.250627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.250681] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.305 [2024-11-26 23:57:28.250693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.250702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.305 [2024-11-26 23:57:28.250715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:40.305 [2024-11-26 23:57:28.250727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.257111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.257270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.305 [2024-11-26 23:57:28.257300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.362 ms 00:23:40.305 [2024-11-26 23:57:28.257314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.257403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.305 [2024-11-26 23:57:28.257413] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.305 [2024-11-26 23:57:28.257423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:40.305 [2024-11-26 23:57:28.257436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.305 [2024-11-26 23:57:28.258958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 160.961 ms, result 0 00:23:41.699  [2024-11-26T23:57:30.775Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-26T23:57:31.721Z] Copying: 28/1024 [MB] (14 MBps) [2024-11-26T23:57:32.667Z] Copying: 39/1024 [MB] (10 MBps) [2024-11-26T23:57:33.612Z] Copying: 56/1024 [MB] (17 MBps) [2024-11-26T23:57:34.557Z] Copying: 70/1024 [MB] (13 MBps) [2024-11-26T23:57:35.563Z] Copying: 87/1024 [MB] (16 MBps) [2024-11-26T23:57:36.504Z] Copying: 98/1024 [MB] (11 MBps) [2024-11-26T23:57:37.886Z] Copying: 117/1024 [MB] (19 MBps) [2024-11-26T23:57:38.462Z] Copying: 131/1024 [MB] (14 MBps) [2024-11-26T23:57:39.851Z] Copying: 142/1024 [MB] (10 MBps) [2024-11-26T23:57:40.794Z] Copying: 152/1024 [MB] (10 MBps) [2024-11-26T23:57:41.739Z] Copying: 163/1024 [MB] (11 MBps) [2024-11-26T23:57:42.683Z] Copying: 180/1024 [MB] (16 MBps) [2024-11-26T23:57:43.629Z] Copying: 191/1024 [MB] (10 MBps) [2024-11-26T23:57:44.575Z] Copying: 211/1024 [MB] (19 MBps) [2024-11-26T23:57:45.514Z] Copying: 223/1024 [MB] (11 MBps) [2024-11-26T23:57:46.457Z] Copying: 249/1024 [MB] (26 MBps) [2024-11-26T23:57:47.841Z] Copying: 269/1024 [MB] (19 MBps) [2024-11-26T23:57:48.782Z] Copying: 288/1024 [MB] (19 MBps) [2024-11-26T23:57:49.734Z] Copying: 302/1024 [MB] (13 MBps) [2024-11-26T23:57:50.680Z] Copying: 323/1024 [MB] (21 MBps) [2024-11-26T23:57:51.621Z] Copying: 347/1024 [MB] (23 MBps) [2024-11-26T23:57:52.565Z] Copying: 368/1024 [MB] (21 MBps) [2024-11-26T23:57:53.510Z] Copying: 390/1024 [MB] (22 MBps) [2024-11-26T23:57:54.455Z] Copying: 409/1024 [MB] (18 MBps) [2024-11-26T23:57:55.841Z] Copying: 427/1024 [MB] (17 MBps) [2024-11-26T23:57:56.783Z] Copying: 438/1024 [MB] (10 MBps) [2024-11-26T23:57:57.726Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-26T23:57:58.668Z] Copying: 460/1024 [MB] (11 MBps) [2024-11-26T23:57:59.611Z] Copying: 470/1024 [MB] (10 MBps) [2024-11-26T23:58:00.556Z] Copying: 483/1024 [MB] (13 MBps) [2024-11-26T23:58:01.502Z] Copying: 496/1024 [MB] (12 MBps) [2024-11-26T23:58:02.893Z] Copying: 510/1024 [MB] (13 MBps) [2024-11-26T23:58:03.467Z] Copying: 524/1024 [MB] (14 MBps) [2024-11-26T23:58:04.856Z] Copying: 541/1024 [MB] (16 MBps) [2024-11-26T23:58:05.802Z] Copying: 551/1024 [MB] (10 MBps) [2024-11-26T23:58:06.749Z] Copying: 563/1024 [MB] (11 MBps) [2024-11-26T23:58:07.792Z] Copying: 574/1024 [MB] (11 MBps) [2024-11-26T23:58:08.735Z] Copying: 585/1024 [MB] (10 MBps) [2024-11-26T23:58:09.692Z] Copying: 597/1024 [MB] (12 MBps) [2024-11-26T23:58:10.638Z] Copying: 610/1024 [MB] (13 MBps) [2024-11-26T23:58:11.584Z] Copying: 626/1024 [MB] (15 MBps) [2024-11-26T23:58:12.531Z] Copying: 638/1024 [MB] (11 MBps) [2024-11-26T23:58:13.472Z] Copying: 655/1024 [MB] (16 MBps) [2024-11-26T23:58:14.858Z] Copying: 670/1024 [MB] (14 MBps) [2024-11-26T23:58:15.803Z] Copying: 682/1024 [MB] (12 MBps) [2024-11-26T23:58:16.747Z] Copying: 694/1024 [MB] (11 MBps) [2024-11-26T23:58:17.690Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-26T23:58:18.632Z] Copying: 716/1024 [MB] (11 MBps) [2024-11-26T23:58:19.575Z] Copying: 728/1024 [MB] (11 MBps) [2024-11-26T23:58:20.521Z] Copying: 741/1024 [MB] (13 MBps) 
[2024-11-26T23:58:21.465Z] Copying: 751/1024 [MB] (10 MBps) [2024-11-26T23:58:22.853Z] Copying: 763/1024 [MB] (11 MBps) [2024-11-26T23:58:23.796Z] Copying: 773/1024 [MB] (10 MBps) [2024-11-26T23:58:24.740Z] Copying: 786/1024 [MB] (12 MBps) [2024-11-26T23:58:25.686Z] Copying: 802/1024 [MB] (15 MBps) [2024-11-26T23:58:26.630Z] Copying: 813/1024 [MB] (10 MBps) [2024-11-26T23:58:27.573Z] Copying: 832/1024 [MB] (19 MBps) [2024-11-26T23:58:28.517Z] Copying: 850/1024 [MB] (18 MBps) [2024-11-26T23:58:29.460Z] Copying: 865/1024 [MB] (15 MBps) [2024-11-26T23:58:30.847Z] Copying: 877/1024 [MB] (11 MBps) [2024-11-26T23:58:31.790Z] Copying: 893/1024 [MB] (15 MBps) [2024-11-26T23:58:32.729Z] Copying: 913/1024 [MB] (19 MBps) [2024-11-26T23:58:33.673Z] Copying: 927/1024 [MB] (14 MBps) [2024-11-26T23:58:34.617Z] Copying: 945/1024 [MB] (17 MBps) [2024-11-26T23:58:35.560Z] Copying: 963/1024 [MB] (18 MBps) [2024-11-26T23:58:36.503Z] Copying: 983/1024 [MB] (19 MBps) [2024-11-26T23:58:37.890Z] Copying: 1003/1024 [MB] (20 MBps) [2024-11-26T23:58:37.890Z] Copying: 1022/1024 [MB] (19 MBps) [2024-11-26T23:58:37.890Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-26 23:58:37.591342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.759 [2024-11-26 23:58:37.591434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.759 [2024-11-26 23:58:37.591457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:49.759 [2024-11-26 23:58:37.591470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.759 [2024-11-26 23:58:37.591504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.759 [2024-11-26 23:58:37.592572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.759 [2024-11-26 23:58:37.592626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.759 [2024-11-26 23:58:37.592650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:24:49.759 [2024-11-26 23:58:37.592663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.759 [2024-11-26 23:58:37.593004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.759 [2024-11-26 23:58:37.593032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.759 [2024-11-26 23:58:37.593050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:24:49.759 [2024-11-26 23:58:37.593062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.759 [2024-11-26 23:58:37.599643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.759 [2024-11-26 23:58:37.599699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:49.759 [2024-11-26 23:58:37.599712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.553 ms 00:24:49.759 [2024-11-26 23:58:37.599721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.759 [2024-11-26 23:58:37.607784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.759 [2024-11-26 23:58:37.607847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:49.759 [2024-11-26 23:58:37.607860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.002 ms 00:24:49.759 [2024-11-26 23:58:37.607870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.759 [2024-11-26 
23:58:37.610768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.759 [2024-11-26 23:58:37.610838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:49.759 [2024-11-26 23:58:37.610850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:24:49.759 [2024-11-26 23:58:37.610858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.759 [2024-11-26 23:58:37.616362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.759 [2024-11-26 23:58:37.616419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:49.759 [2024-11-26 23:58:37.616433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.455 ms 00:24:49.759 [2024-11-26 23:58:37.616451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.022 [2024-11-26 23:58:37.977293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.022 [2024-11-26 23:58:37.977369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:50.022 [2024-11-26 23:58:37.977385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 360.788 ms 00:24:50.022 [2024-11-26 23:58:37.977395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.022 [2024-11-26 23:58:37.980959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.022 [2024-11-26 23:58:37.981009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:50.022 [2024-11-26 23:58:37.981021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:24:50.022 [2024-11-26 23:58:37.981032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.022 [2024-11-26 23:58:37.983968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.022 [2024-11-26 23:58:37.984015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:50.022 [2024-11-26 23:58:37.984026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:24:50.022 [2024-11-26 23:58:37.984034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.022 [2024-11-26 23:58:37.986280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.022 [2024-11-26 23:58:37.986327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:50.022 [2024-11-26 23:58:37.986338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:24:50.022 [2024-11-26 23:58:37.986346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.022 [2024-11-26 23:58:37.988456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.022 [2024-11-26 23:58:37.988506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:50.022 [2024-11-26 23:58:37.988518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:24:50.022 [2024-11-26 23:58:37.988526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.022 [2024-11-26 23:58:37.988568] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:50.022 [2024-11-26 23:58:37.988587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:50.022 [2024-11-26 23:58:37.988599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:24:50.022 [2024-11-26 23:58:37.988609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:50.022 [2024-11-26 23:58:37.988754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:24:50.023 [2024-11-26 23:58:37.988858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.988989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989281] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:50.023 [2024-11-26 23:58:37.989480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:50.023 [2024-11-26 23:58:37.989488] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: b112cea9-c2c5-4cd0-a5f8-134f526173e0 00:24:50.023 [2024-11-26 23:58:37.989506] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:50.023 [2024-11-26 23:58:37.989523] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30912 00:24:50.023 [2024-11-26 23:58:37.989536] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29952 00:24:50.023 [2024-11-26 23:58:37.989545] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0321 00:24:50.023 [2024-11-26 23:58:37.989553] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:50.023 [2024-11-26 23:58:37.989562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:50.023 [2024-11-26 23:58:37.989570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:50.023 [2024-11-26 23:58:37.989577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:50.023 [2024-11-26 23:58:37.989584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:50.023 [2024-11-26 23:58:37.989592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.023 [2024-11-26 23:58:37.989601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:50.024 [2024-11-26 23:58:37.989609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:24:50.024 [2024-11-26 23:58:37.989618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:37.992780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.024 [2024-11-26 23:58:37.992840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:50.024 [2024-11-26 23:58:37.992851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:24:50.024 [2024-11-26 23:58:37.992861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:37.993015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.024 [2024-11-26 23:58:37.993026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:50.024 [2024-11-26 23:58:37.993040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:24:50.024 [2024-11-26 23:58:37.993049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.003309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.003525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.024 [2024-11-26 23:58:38.003547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.003558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.003637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.003649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.024 [2024-11-26 23:58:38.003658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.003666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.003759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.003771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:24:50.024 [2024-11-26 23:58:38.003781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.003814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.003833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.003843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.024 [2024-11-26 23:58:38.003851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.003867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.023479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.023557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.024 [2024-11-26 23:58:38.023569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.023578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.038452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.038679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.024 [2024-11-26 23:58:38.038699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.038710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.038810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.038824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.024 [2024-11-26 23:58:38.038834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.038843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.038890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.038901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.024 [2024-11-26 23:58:38.038910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.038919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.039017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.039032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.024 [2024-11-26 23:58:38.039041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.039050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.039084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.039095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:50.024 [2024-11-26 23:58:38.039104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.039113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.039166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 
23:58:38.039181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.024 [2024-11-26 23:58:38.039191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.039200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.039258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.024 [2024-11-26 23:58:38.039270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.024 [2024-11-26 23:58:38.039278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.024 [2024-11-26 23:58:38.039288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.024 [2024-11-26 23:58:38.039468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 448.084 ms, result 0 00:24:50.285 00:24:50.285 00:24:50.285 23:58:38 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:52.891 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:52.891 Process with pid 88270 is not found 00:24:52.891 Remove shared memory files 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88270 00:24:52.891 23:58:40 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88270 ']' 00:24:52.891 23:58:40 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88270 00:24:52.891 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88270) - No such process 00:24:52.891 23:58:40 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88270 is not found' 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:52.891 23:58:40 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:52.891 ************************************ 00:24:52.891 END TEST ftl_restore 00:24:52.891 ************************************ 00:24:52.891 00:24:52.891 real 4m31.709s 00:24:52.891 user 4m19.064s 00:24:52.891 sys 0m12.518s 00:24:52.891 23:58:40 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.891 23:58:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:52.891 23:58:40 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:52.891 23:58:40 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:52.891 23:58:40 ftl -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:52.891 23:58:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:52.891 ************************************ 00:24:52.891 START TEST ftl_dirty_shutdown 00:24:52.891 ************************************ 00:24:52.891 23:58:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:52.891 * Looking for test storage... 00:24:52.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:52.891 23:58:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:52.891 23:58:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:52.892 23:58:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.892 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.153 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:53.153 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:53.153 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.154 --rc genhtml_branch_coverage=1 00:24:53.154 --rc genhtml_function_coverage=1 00:24:53.154 --rc genhtml_legend=1 00:24:53.154 --rc geninfo_all_blocks=1 00:24:53.154 --rc geninfo_unexecuted_blocks=1 00:24:53.154 00:24:53.154 ' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.154 --rc genhtml_branch_coverage=1 00:24:53.154 --rc genhtml_function_coverage=1 00:24:53.154 --rc genhtml_legend=1 00:24:53.154 --rc geninfo_all_blocks=1 00:24:53.154 --rc geninfo_unexecuted_blocks=1 00:24:53.154 00:24:53.154 ' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.154 --rc genhtml_branch_coverage=1 00:24:53.154 --rc genhtml_function_coverage=1 00:24:53.154 --rc genhtml_legend=1 00:24:53.154 --rc geninfo_all_blocks=1 00:24:53.154 --rc geninfo_unexecuted_blocks=1 00:24:53.154 00:24:53.154 ' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.154 --rc genhtml_branch_coverage=1 00:24:53.154 --rc genhtml_function_coverage=1 00:24:53.154 --rc genhtml_legend=1 00:24:53.154 --rc geninfo_all_blocks=1 00:24:53.154 --rc geninfo_unexecuted_blocks=1 00:24:53.154 00:24:53.154 ' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:53.154 23:58:41 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91142 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91142 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91142 ']' 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.154 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:53.154 [2024-11-26 23:58:41.146276] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
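The trace above brings up a dedicated spdk_tgt instance (pid 91142, core mask 0x1) for the dirty-shutdown run and waits on its RPC socket before any bdev commands are issued. A minimal sketch of that launch-and-wait pattern, using the paths seen in this log and a simplified polling loop standing in for the waitforlisten helper (rpc_get_methods is only an illustrative readiness probe here):

  # Start the SPDK target pinned to core 0; keep the pid so the EXIT trap can clean up.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT

  # Poll the UNIX-domain RPC socket until the app answers; rpc.py fails until then.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done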
00:24:53.154 [2024-11-26 23:58:41.146695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91142 ] 00:24:53.415 [2024-11-26 23:58:41.295230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.415 [2024-11-26 23:58:41.337188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.989 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.989 23:58:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:53.989 23:58:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:53.989 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:53.989 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:53.989 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:53.989 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:53.989 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:54.251 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:54.251 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:54.251 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:54.251 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:54.251 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:54.251 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:54.251 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:54.251 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:54.513 { 00:24:54.513 "name": "nvme0n1", 00:24:54.513 "aliases": [ 00:24:54.513 "335dad35-c564-44cb-a40d-2817ace69ea4" 00:24:54.513 ], 00:24:54.513 "product_name": "NVMe disk", 00:24:54.513 "block_size": 4096, 00:24:54.513 "num_blocks": 1310720, 00:24:54.513 "uuid": "335dad35-c564-44cb-a40d-2817ace69ea4", 00:24:54.513 "numa_id": -1, 00:24:54.513 "assigned_rate_limits": { 00:24:54.513 "rw_ios_per_sec": 0, 00:24:54.513 "rw_mbytes_per_sec": 0, 00:24:54.513 "r_mbytes_per_sec": 0, 00:24:54.513 "w_mbytes_per_sec": 0 00:24:54.513 }, 00:24:54.513 "claimed": true, 00:24:54.513 "claim_type": "read_many_write_one", 00:24:54.513 "zoned": false, 00:24:54.513 "supported_io_types": { 00:24:54.513 "read": true, 00:24:54.513 "write": true, 00:24:54.513 "unmap": true, 00:24:54.513 "flush": true, 00:24:54.513 "reset": true, 00:24:54.513 "nvme_admin": true, 00:24:54.513 "nvme_io": true, 00:24:54.513 "nvme_io_md": false, 00:24:54.513 "write_zeroes": true, 00:24:54.513 "zcopy": false, 00:24:54.513 "get_zone_info": false, 00:24:54.513 "zone_management": false, 00:24:54.513 "zone_append": false, 00:24:54.513 "compare": true, 00:24:54.513 "compare_and_write": false, 00:24:54.513 "abort": true, 00:24:54.513 "seek_hole": false, 00:24:54.513 "seek_data": false, 00:24:54.513 
"copy": true, 00:24:54.513 "nvme_iov_md": false 00:24:54.513 }, 00:24:54.513 "driver_specific": { 00:24:54.513 "nvme": [ 00:24:54.513 { 00:24:54.513 "pci_address": "0000:00:11.0", 00:24:54.513 "trid": { 00:24:54.513 "trtype": "PCIe", 00:24:54.513 "traddr": "0000:00:11.0" 00:24:54.513 }, 00:24:54.513 "ctrlr_data": { 00:24:54.513 "cntlid": 0, 00:24:54.513 "vendor_id": "0x1b36", 00:24:54.513 "model_number": "QEMU NVMe Ctrl", 00:24:54.513 "serial_number": "12341", 00:24:54.513 "firmware_revision": "8.0.0", 00:24:54.513 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:54.513 "oacs": { 00:24:54.513 "security": 0, 00:24:54.513 "format": 1, 00:24:54.513 "firmware": 0, 00:24:54.513 "ns_manage": 1 00:24:54.513 }, 00:24:54.513 "multi_ctrlr": false, 00:24:54.513 "ana_reporting": false 00:24:54.513 }, 00:24:54.513 "vs": { 00:24:54.513 "nvme_version": "1.4" 00:24:54.513 }, 00:24:54.513 "ns_data": { 00:24:54.513 "id": 1, 00:24:54.513 "can_share": false 00:24:54.513 } 00:24:54.513 } 00:24:54.513 ], 00:24:54.513 "mp_policy": "active_passive" 00:24:54.513 } 00:24:54.513 } 00:24:54.513 ]' 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:54.513 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:54.775 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=56b1d744-844f-428d-ae2b-cb39ccec7821 00:24:54.775 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:54.775 23:58:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56b1d744-844f-428d-ae2b-cb39ccec7821 00:24:55.037 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:55.300 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=8ce5df7d-3ff1-4c74-8280-631546b83492 00:24:55.300 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8ce5df7d-3ff1-4c74-8280-631546b83492 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:55.561 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:55.820 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:55.820 { 00:24:55.820 "name": "357bf0ba-c2e2-45ca-9e78-18fc5afa43de", 00:24:55.820 "aliases": [ 00:24:55.820 "lvs/nvme0n1p0" 00:24:55.820 ], 00:24:55.820 "product_name": "Logical Volume", 00:24:55.820 "block_size": 4096, 00:24:55.821 "num_blocks": 26476544, 00:24:55.821 "uuid": "357bf0ba-c2e2-45ca-9e78-18fc5afa43de", 00:24:55.821 "assigned_rate_limits": { 00:24:55.821 "rw_ios_per_sec": 0, 00:24:55.821 "rw_mbytes_per_sec": 0, 00:24:55.821 "r_mbytes_per_sec": 0, 00:24:55.821 "w_mbytes_per_sec": 0 00:24:55.821 }, 00:24:55.821 "claimed": false, 00:24:55.821 "zoned": false, 00:24:55.821 "supported_io_types": { 00:24:55.821 "read": true, 00:24:55.821 "write": true, 00:24:55.821 "unmap": true, 00:24:55.821 "flush": false, 00:24:55.821 "reset": true, 00:24:55.821 "nvme_admin": false, 00:24:55.821 "nvme_io": false, 00:24:55.821 "nvme_io_md": false, 00:24:55.821 "write_zeroes": true, 00:24:55.821 "zcopy": false, 00:24:55.821 "get_zone_info": false, 00:24:55.821 "zone_management": false, 00:24:55.821 "zone_append": false, 00:24:55.821 "compare": false, 00:24:55.821 "compare_and_write": false, 00:24:55.821 "abort": false, 00:24:55.821 "seek_hole": true, 00:24:55.821 "seek_data": true, 00:24:55.821 "copy": false, 00:24:55.821 "nvme_iov_md": false 00:24:55.821 }, 00:24:55.821 "driver_specific": { 00:24:55.821 "lvol": { 00:24:55.821 "lvol_store_uuid": "8ce5df7d-3ff1-4c74-8280-631546b83492", 00:24:55.821 "base_bdev": "nvme0n1", 00:24:55.821 "thin_provision": true, 00:24:55.821 "num_allocated_clusters": 0, 00:24:55.821 "snapshot": false, 00:24:55.821 "clone": false, 00:24:55.821 "esnap_clone": false 00:24:55.821 } 00:24:55.821 } 00:24:55.821 } 00:24:55.821 ]' 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:55.821 23:58:43 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:56.079 23:58:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:56.079 23:58:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:56.079 23:58:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:56.079 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:56.079 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:56.079 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:56.079 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:56.079 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:56.338 { 00:24:56.338 "name": "357bf0ba-c2e2-45ca-9e78-18fc5afa43de", 00:24:56.338 "aliases": [ 00:24:56.338 "lvs/nvme0n1p0" 00:24:56.338 ], 00:24:56.338 "product_name": "Logical Volume", 00:24:56.338 "block_size": 4096, 00:24:56.338 "num_blocks": 26476544, 00:24:56.338 "uuid": "357bf0ba-c2e2-45ca-9e78-18fc5afa43de", 00:24:56.338 "assigned_rate_limits": { 00:24:56.338 "rw_ios_per_sec": 0, 00:24:56.338 "rw_mbytes_per_sec": 0, 00:24:56.338 "r_mbytes_per_sec": 0, 00:24:56.338 "w_mbytes_per_sec": 0 00:24:56.338 }, 00:24:56.338 "claimed": false, 00:24:56.338 "zoned": false, 00:24:56.338 "supported_io_types": { 00:24:56.338 "read": true, 00:24:56.338 "write": true, 00:24:56.338 "unmap": true, 00:24:56.338 "flush": false, 00:24:56.338 "reset": true, 00:24:56.338 "nvme_admin": false, 00:24:56.338 "nvme_io": false, 00:24:56.338 "nvme_io_md": false, 00:24:56.338 "write_zeroes": true, 00:24:56.338 "zcopy": false, 00:24:56.338 "get_zone_info": false, 00:24:56.338 "zone_management": false, 00:24:56.338 "zone_append": false, 00:24:56.338 "compare": false, 00:24:56.338 "compare_and_write": false, 00:24:56.338 "abort": false, 00:24:56.338 "seek_hole": true, 00:24:56.338 "seek_data": true, 00:24:56.338 "copy": false, 00:24:56.338 "nvme_iov_md": false 00:24:56.338 }, 00:24:56.338 "driver_specific": { 00:24:56.338 "lvol": { 00:24:56.338 "lvol_store_uuid": "8ce5df7d-3ff1-4c74-8280-631546b83492", 00:24:56.338 "base_bdev": "nvme0n1", 00:24:56.338 "thin_provision": true, 00:24:56.338 "num_allocated_clusters": 0, 00:24:56.338 "snapshot": false, 00:24:56.338 "clone": false, 00:24:56.338 "esnap_clone": false 00:24:56.338 } 00:24:56.338 } 00:24:56.338 } 00:24:56.338 ]' 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:56.338 23:58:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:56.596 23:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:56.596 23:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:56.596 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:56.596 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:56.596 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:56.596 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:56.597 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 357bf0ba-c2e2-45ca-9e78-18fc5afa43de 00:24:56.855 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:56.855 { 00:24:56.855 "name": "357bf0ba-c2e2-45ca-9e78-18fc5afa43de", 00:24:56.855 "aliases": [ 00:24:56.855 "lvs/nvme0n1p0" 00:24:56.855 ], 00:24:56.855 "product_name": "Logical Volume", 00:24:56.855 "block_size": 4096, 00:24:56.855 "num_blocks": 26476544, 00:24:56.855 "uuid": "357bf0ba-c2e2-45ca-9e78-18fc5afa43de", 00:24:56.855 "assigned_rate_limits": { 00:24:56.855 "rw_ios_per_sec": 0, 00:24:56.855 "rw_mbytes_per_sec": 0, 00:24:56.855 "r_mbytes_per_sec": 0, 00:24:56.855 "w_mbytes_per_sec": 0 00:24:56.855 }, 00:24:56.855 "claimed": false, 00:24:56.855 "zoned": false, 00:24:56.855 "supported_io_types": { 00:24:56.855 "read": true, 00:24:56.855 "write": true, 00:24:56.855 "unmap": true, 00:24:56.855 "flush": false, 00:24:56.855 "reset": true, 00:24:56.855 "nvme_admin": false, 00:24:56.855 "nvme_io": false, 00:24:56.855 "nvme_io_md": false, 00:24:56.855 "write_zeroes": true, 00:24:56.855 "zcopy": false, 00:24:56.855 "get_zone_info": false, 00:24:56.855 "zone_management": false, 00:24:56.855 "zone_append": false, 00:24:56.855 "compare": false, 00:24:56.855 "compare_and_write": false, 00:24:56.855 "abort": false, 00:24:56.855 "seek_hole": true, 00:24:56.855 "seek_data": true, 00:24:56.856 "copy": false, 00:24:56.856 "nvme_iov_md": false 00:24:56.856 }, 00:24:56.856 "driver_specific": { 00:24:56.856 "lvol": { 00:24:56.856 "lvol_store_uuid": "8ce5df7d-3ff1-4c74-8280-631546b83492", 00:24:56.856 "base_bdev": "nvme0n1", 00:24:56.856 "thin_provision": true, 00:24:56.856 "num_allocated_clusters": 0, 00:24:56.856 "snapshot": false, 00:24:56.856 "clone": false, 00:24:56.856 "esnap_clone": false 00:24:56.856 } 00:24:56.856 } 00:24:56.856 } 00:24:56.856 ]' 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 357bf0ba-c2e2-45ca-9e78-18fc5afa43de 
--l2p_dram_limit 10' 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:56.856 23:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 357bf0ba-c2e2-45ca-9e78-18fc5afa43de --l2p_dram_limit 10 -c nvc0n1p0 00:24:57.117 [2024-11-26 23:58:44.996003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:44.996048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:57.117 [2024-11-26 23:58:44.996060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:57.117 [2024-11-26 23:58:44.996071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:44.996121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:44.996133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.117 [2024-11-26 23:58:44.996139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:57.117 [2024-11-26 23:58:44.996150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:44.996165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:57.117 [2024-11-26 23:58:44.996379] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:57.117 [2024-11-26 23:58:44.996391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:44.996399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.117 [2024-11-26 23:58:44.996406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:24:57.117 [2024-11-26 23:58:44.996417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:44.996467] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 549601c2-4d66-4189-bcb6-0dad7a61c53b 00:24:57.117 [2024-11-26 23:58:44.997749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:44.997860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:57.117 [2024-11-26 23:58:44.997876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:57.117 [2024-11-26 23:58:44.997883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:45.004763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:45.004801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.117 [2024-11-26 23:58:45.004811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:24:57.117 [2024-11-26 23:58:45.004822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:45.004889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:45.004896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.117 [2024-11-26 23:58:45.004905] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:57.117 [2024-11-26 23:58:45.004910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:45.004945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:45.004953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:57.117 [2024-11-26 23:58:45.004962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:57.117 [2024-11-26 23:58:45.004974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:45.004993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:57.117 [2024-11-26 23:58:45.006650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:45.006677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.117 [2024-11-26 23:58:45.006685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:24:57.117 [2024-11-26 23:58:45.006692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:45.006719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.117 [2024-11-26 23:58:45.006727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:57.117 [2024-11-26 23:58:45.006737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:57.117 [2024-11-26 23:58:45.006747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.117 [2024-11-26 23:58:45.006770] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:57.117 [2024-11-26 23:58:45.006903] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:57.117 [2024-11-26 23:58:45.006916] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:57.117 [2024-11-26 23:58:45.006927] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:57.118 [2024-11-26 23:58:45.006935] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:57.118 [2024-11-26 23:58:45.006946] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:57.118 [2024-11-26 23:58:45.006957] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:57.118 [2024-11-26 23:58:45.006966] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:57.118 [2024-11-26 23:58:45.006972] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:57.118 [2024-11-26 23:58:45.006979] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:57.118 [2024-11-26 23:58:45.006985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.118 [2024-11-26 23:58:45.006993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:57.118 [2024-11-26 23:58:45.007004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:24:57.118 [2024-11-26 23:58:45.007011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.118 [2024-11-26 23:58:45.007076] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.118 [2024-11-26 23:58:45.007086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:57.118 [2024-11-26 23:58:45.007092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:57.118 [2024-11-26 23:58:45.007102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.118 [2024-11-26 23:58:45.007174] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:57.118 [2024-11-26 23:58:45.007183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:57.118 [2024-11-26 23:58:45.007189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:57.118 [2024-11-26 23:58:45.007210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:57.118 [2024-11-26 23:58:45.007228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:57.118 [2024-11-26 23:58:45.007240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:57.118 [2024-11-26 23:58:45.007246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:57.118 [2024-11-26 23:58:45.007251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:57.118 [2024-11-26 23:58:45.007260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:57.118 [2024-11-26 23:58:45.007265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:57.118 [2024-11-26 23:58:45.007271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:57.118 [2024-11-26 23:58:45.007282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:57.118 [2024-11-26 23:58:45.007300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:57.118 [2024-11-26 23:58:45.007323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:57.118 [2024-11-26 23:58:45.007343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007356] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:57.118 [2024-11-26 23:58:45.007367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:57.118 [2024-11-26 23:58:45.007386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:57.118 [2024-11-26 23:58:45.007400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:57.118 [2024-11-26 23:58:45.007409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:57.118 [2024-11-26 23:58:45.007414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:57.118 [2024-11-26 23:58:45.007422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:57.118 [2024-11-26 23:58:45.007428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:57.118 [2024-11-26 23:58:45.007435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:57.118 [2024-11-26 23:58:45.007447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:57.118 [2024-11-26 23:58:45.007453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007460] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:57.118 [2024-11-26 23:58:45.007472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:57.118 [2024-11-26 23:58:45.007482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.118 [2024-11-26 23:58:45.007498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:57.118 [2024-11-26 23:58:45.007504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:57.118 [2024-11-26 23:58:45.007511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:57.118 [2024-11-26 23:58:45.007517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:57.118 [2024-11-26 23:58:45.007525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:57.118 [2024-11-26 23:58:45.007531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:57.118 [2024-11-26 23:58:45.007541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:57.118 [2024-11-26 23:58:45.007550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.118 [2024-11-26 23:58:45.007560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:57.118 [2024-11-26 23:58:45.007567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:57.118 [2024-11-26 23:58:45.007575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:57.118 [2024-11-26 23:58:45.007581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:57.118 [2024-11-26 23:58:45.007589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:57.118 [2024-11-26 23:58:45.007595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:57.118 [2024-11-26 23:58:45.007605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:57.118 [2024-11-26 23:58:45.007611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:57.118 [2024-11-26 23:58:45.007619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:57.118 [2024-11-26 23:58:45.007625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:57.118 [2024-11-26 23:58:45.007633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:57.118 [2024-11-26 23:58:45.007639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:57.118 [2024-11-26 23:58:45.007646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:57.118 [2024-11-26 23:58:45.007653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:57.118 [2024-11-26 23:58:45.007661] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:57.118 [2024-11-26 23:58:45.007668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.118 [2024-11-26 23:58:45.007677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:57.118 [2024-11-26 23:58:45.007683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:57.118 [2024-11-26 23:58:45.007691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:57.118 [2024-11-26 23:58:45.007698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:57.118 [2024-11-26 23:58:45.007706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.118 [2024-11-26 23:58:45.007713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:57.118 [2024-11-26 23:58:45.007722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:24:57.118 [2024-11-26 23:58:45.007729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.118 [2024-11-26 23:58:45.007761] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:57.118 [2024-11-26 23:58:45.007769] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:00.411 [2024-11-26 23:58:48.202326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.202398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:00.411 [2024-11-26 23:58:48.202418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3194.546 ms 00:25:00.411 [2024-11-26 23:58:48.202434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.213600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.213645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.411 [2024-11-26 23:58:48.213665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.073 ms 00:25:00.411 [2024-11-26 23:58:48.213675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.213782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.213812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:00.411 [2024-11-26 23:58:48.213823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:00.411 [2024-11-26 23:58:48.213831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.224598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.224637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.411 [2024-11-26 23:58:48.224651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.717 ms 00:25:00.411 [2024-11-26 23:58:48.224662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.224693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.224701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.411 [2024-11-26 23:58:48.224713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:00.411 [2024-11-26 23:58:48.224720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.225190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.225207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.411 [2024-11-26 23:58:48.225219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:25:00.411 [2024-11-26 23:58:48.225228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.225356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.225382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.411 [2024-11-26 23:58:48.225394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:00.411 [2024-11-26 23:58:48.225408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.232802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.232833] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.411 [2024-11-26 23:58:48.232852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.357 ms 00:25:00.411 [2024-11-26 23:58:48.232860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.261534] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:00.411 [2024-11-26 23:58:48.265086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.265123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:00.411 [2024-11-26 23:58:48.265136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.162 ms 00:25:00.411 [2024-11-26 23:58:48.265146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.325581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.325677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:00.411 [2024-11-26 23:58:48.325701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.393 ms 00:25:00.411 [2024-11-26 23:58:48.325724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.326097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.326132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:00.411 [2024-11-26 23:58:48.326149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:25:00.411 [2024-11-26 23:58:48.326172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.331634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.331756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:00.411 [2024-11-26 23:58:48.331846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.382 ms 00:25:00.411 [2024-11-26 23:58:48.331909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.336047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.336181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:00.411 [2024-11-26 23:58:48.336257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:25:00.411 [2024-11-26 23:58:48.336282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.336606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.336645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:00.411 [2024-11-26 23:58:48.336667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:25:00.411 [2024-11-26 23:58:48.336740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.369307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.369447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:00.411 [2024-11-26 23:58:48.369507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.494 ms 00:25:00.411 [2024-11-26 23:58:48.369535] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.375287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.375406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:00.411 [2024-11-26 23:58:48.375458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.693 ms 00:25:00.411 [2024-11-26 23:58:48.375485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.411 [2024-11-26 23:58:48.379819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.411 [2024-11-26 23:58:48.379931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:00.412 [2024-11-26 23:58:48.379981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.246 ms 00:25:00.412 [2024-11-26 23:58:48.380008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.412 [2024-11-26 23:58:48.384819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.412 [2024-11-26 23:58:48.384934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:00.412 [2024-11-26 23:58:48.384985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 00:25:00.412 [2024-11-26 23:58:48.385015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.412 [2024-11-26 23:58:48.385087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.412 [2024-11-26 23:58:48.385120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:00.412 [2024-11-26 23:58:48.385146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:00.412 [2024-11-26 23:58:48.385168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.412 [2024-11-26 23:58:48.385318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.412 [2024-11-26 23:58:48.385358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:00.412 [2024-11-26 23:58:48.385380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:00.412 [2024-11-26 23:58:48.385446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.412 [2024-11-26 23:58:48.386645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3390.124 ms, result 0 00:25:00.412 { 00:25:00.412 "name": "ftl0", 00:25:00.412 "uuid": "549601c2-4d66-4189-bcb6-0dad7a61c53b" 00:25:00.412 } 00:25:00.412 23:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:00.412 23:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:00.671 23:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:00.671 23:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:00.671 23:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:00.930 /dev/nbd0 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:00.930 1+0 records in 00:25:00.930 1+0 records out 00:25:00.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458175 s, 8.9 MB/s 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:00.930 23:58:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:00.930 [2024-11-26 23:58:48.994847] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:25:00.930 [2024-11-26 23:58:48.995284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91284 ] 00:25:01.192 [2024-11-26 23:58:49.153978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.192 [2024-11-26 23:58:49.195197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.585  [2024-11-26T23:58:51.661Z] Copying: 185/1024 [MB] (185 MBps) [2024-11-26T23:58:52.602Z] Copying: 374/1024 [MB] (189 MBps) [2024-11-26T23:58:53.538Z] Copying: 594/1024 [MB] (219 MBps) [2024-11-26T23:58:54.104Z] Copying: 853/1024 [MB] (259 MBps) [2024-11-26T23:58:54.362Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:25:06.231 00:25:06.231 23:58:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:08.135 23:58:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:08.135 [2024-11-26 23:58:56.135160] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
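At this point the test has generated 1 GiB of random data into testfile, recorded its md5, and is replaying that file onto /dev/nbd0, the NBD endpoint backed by the freshly created ftl0 bdev. A rough sketch of that write-and-checksum round trip, assuming the same paths, core mask, and 4096-byte block size shown in the trace:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

  # 262144 blocks of 4096 bytes = 1 GiB of random test data.
  "$SPDK_DD" -m 0x2 --if=/dev/urandom --of="$TESTFILE" --bs=4096 --count=262144
  md5sum "$TESTFILE" > "$TESTFILE.md5"

  # Stream the file through the FTL bdev via NBD, using O_DIRECT writes.
  "$SPDK_DD" -m 0x2 --if="$TESTFILE" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

  # After the dirty shutdown and subsequent restore, running 'md5sum -c testfile.md5'
  # against the data read back from the device confirms nothing was lost.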
00:25:08.135 [2024-11-26 23:58:56.135923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91361 ] 00:25:08.398 [2024-11-26 23:58:56.289338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.398 [2024-11-26 23:58:56.307069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.343  [2024-11-26T23:58:58.418Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-26T23:58:59.360Z] Copying: 25/1024 [MB] (12 MBps) [2024-11-26T23:59:00.748Z] Copying: 35744/1048576 [kB] (9264 kBps) [2024-11-26T23:59:01.691Z] Copying: 49/1024 [MB] (14 MBps) [2024-11-26T23:59:02.628Z] Copying: 62/1024 [MB] (13 MBps) [2024-11-26T23:59:03.574Z] Copying: 97/1024 [MB] (34 MBps) [2024-11-26T23:59:04.515Z] Copying: 116/1024 [MB] (19 MBps) [2024-11-26T23:59:05.452Z] Copying: 134/1024 [MB] (17 MBps) [2024-11-26T23:59:06.389Z] Copying: 163/1024 [MB] (28 MBps) [2024-11-26T23:59:07.778Z] Copying: 193/1024 [MB] (30 MBps) [2024-11-26T23:59:08.720Z] Copying: 210/1024 [MB] (16 MBps) [2024-11-26T23:59:09.662Z] Copying: 227/1024 [MB] (17 MBps) [2024-11-26T23:59:10.605Z] Copying: 244/1024 [MB] (16 MBps) [2024-11-26T23:59:11.655Z] Copying: 267/1024 [MB] (22 MBps) [2024-11-26T23:59:12.600Z] Copying: 280/1024 [MB] (13 MBps) [2024-11-26T23:59:13.544Z] Copying: 298/1024 [MB] (17 MBps) [2024-11-26T23:59:14.482Z] Copying: 312/1024 [MB] (14 MBps) [2024-11-26T23:59:15.426Z] Copying: 327/1024 [MB] (15 MBps) [2024-11-26T23:59:16.369Z] Copying: 345/1024 [MB] (17 MBps) [2024-11-26T23:59:17.755Z] Copying: 365/1024 [MB] (19 MBps) [2024-11-26T23:59:18.693Z] Copying: 378/1024 [MB] (13 MBps) [2024-11-26T23:59:19.637Z] Copying: 398/1024 [MB] (19 MBps) [2024-11-26T23:59:20.581Z] Copying: 414/1024 [MB] (16 MBps) [2024-11-26T23:59:21.526Z] Copying: 433/1024 [MB] (19 MBps) [2024-11-26T23:59:22.459Z] Copying: 448/1024 [MB] (14 MBps) [2024-11-26T23:59:23.397Z] Copying: 476/1024 [MB] (27 MBps) [2024-11-26T23:59:24.775Z] Copying: 504/1024 [MB] (28 MBps) [2024-11-26T23:59:25.710Z] Copying: 524/1024 [MB] (19 MBps) [2024-11-26T23:59:26.653Z] Copying: 561/1024 [MB] (37 MBps) [2024-11-26T23:59:27.594Z] Copying: 579/1024 [MB] (17 MBps) [2024-11-26T23:59:28.537Z] Copying: 598/1024 [MB] (19 MBps) [2024-11-26T23:59:29.479Z] Copying: 611/1024 [MB] (12 MBps) [2024-11-26T23:59:30.423Z] Copying: 624/1024 [MB] (13 MBps) [2024-11-26T23:59:31.363Z] Copying: 641/1024 [MB] (16 MBps) [2024-11-26T23:59:32.750Z] Copying: 657/1024 [MB] (16 MBps) [2024-11-26T23:59:33.693Z] Copying: 672/1024 [MB] (14 MBps) [2024-11-26T23:59:34.637Z] Copying: 688/1024 [MB] (15 MBps) [2024-11-26T23:59:35.582Z] Copying: 704/1024 [MB] (16 MBps) [2024-11-26T23:59:36.521Z] Copying: 718/1024 [MB] (13 MBps) [2024-11-26T23:59:37.455Z] Copying: 743/1024 [MB] (25 MBps) [2024-11-26T23:59:38.395Z] Copying: 781/1024 [MB] (37 MBps) [2024-11-26T23:59:39.794Z] Copying: 803/1024 [MB] (22 MBps) [2024-11-26T23:59:40.365Z] Copying: 817/1024 [MB] (13 MBps) [2024-11-26T23:59:41.750Z] Copying: 836/1024 [MB] (18 MBps) [2024-11-26T23:59:42.694Z] Copying: 852/1024 [MB] (16 MBps) [2024-11-26T23:59:43.671Z] Copying: 862/1024 [MB] (10 MBps) [2024-11-26T23:59:44.620Z] Copying: 879/1024 [MB] (16 MBps) [2024-11-26T23:59:45.564Z] Copying: 895/1024 [MB] (15 MBps) [2024-11-26T23:59:46.508Z] Copying: 911/1024 [MB] (16 MBps) [2024-11-26T23:59:47.452Z] Copying: 924/1024 [MB] (12 MBps) 
[2024-11-26T23:59:48.442Z] Copying: 940/1024 [MB] (15 MBps) [2024-11-26T23:59:49.384Z] Copying: 955/1024 [MB] (15 MBps) [2024-11-26T23:59:50.773Z] Copying: 971/1024 [MB] (15 MBps) [2024-11-26T23:59:51.713Z] Copying: 989/1024 [MB] (18 MBps) [2024-11-26T23:59:52.283Z] Copying: 1008/1024 [MB] (18 MBps) [2024-11-26T23:59:52.544Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:26:04.413 00:26:04.413 23:59:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:04.413 23:59:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:04.675 23:59:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:04.675 [2024-11-26 23:59:52.708785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.708840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:04.675 [2024-11-26 23:59:52.708856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:04.675 [2024-11-26 23:59:52.708865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.708895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:04.675 [2024-11-26 23:59:52.709434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.709453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:04.675 [2024-11-26 23:59:52.709462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:26:04.675 [2024-11-26 23:59:52.709471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.712199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.712234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:04.675 [2024-11-26 23:59:52.712244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:26:04.675 [2024-11-26 23:59:52.712253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.728590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.728625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:04.675 [2024-11-26 23:59:52.728638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.320 ms 00:26:04.675 [2024-11-26 23:59:52.728647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.734783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.734827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:04.675 [2024-11-26 23:59:52.734837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms 00:26:04.675 [2024-11-26 23:59:52.734847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.737017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.737057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:04.675 [2024-11-26 23:59:52.737067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:26:04.675 [2024-11-26 23:59:52.737078] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.742252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.742287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:04.675 [2024-11-26 23:59:52.742296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.142 ms 00:26:04.675 [2024-11-26 23:59:52.742306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.742423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.742435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:04.675 [2024-11-26 23:59:52.742444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:04.675 [2024-11-26 23:59:52.742460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.744403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.744538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:04.675 [2024-11-26 23:59:52.744554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms 00:26:04.675 [2024-11-26 23:59:52.744562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.746166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.746198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:04.675 [2024-11-26 23:59:52.746207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:26:04.675 [2024-11-26 23:59:52.746216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.747390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.747424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:04.675 [2024-11-26 23:59:52.747433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:26:04.675 [2024-11-26 23:59:52.747442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.748636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.675 [2024-11-26 23:59:52.748739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:04.675 [2024-11-26 23:59:52.748807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:26:04.675 [2024-11-26 23:59:52.748835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.675 [2024-11-26 23:59:52.748876] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:04.675 [2024-11-26 23:59:52.748907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.748939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:04.675 [2024-11-26 23:59:52.749281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749398] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 
23:59:52.749613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:26:04.676 [2024-11-26 23:59:52.749844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:04.676 [2024-11-26 23:59:52.749911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.749995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.750012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.750019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:04.677 [2024-11-26 23:59:52.750040] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:04.677 [2024-11-26 23:59:52.750049] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 549601c2-4d66-4189-bcb6-0dad7a61c53b 00:26:04.677 [2024-11-26 23:59:52.750060] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:04.677 [2024-11-26 23:59:52.750068] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:04.677 [2024-11-26 23:59:52.750077] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:04.677 [2024-11-26 23:59:52.750085] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:04.677 [2024-11-26 23:59:52.750094] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:04.677 [2024-11-26 23:59:52.750102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:04.677 [2024-11-26 23:59:52.750112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:04.677 [2024-11-26 23:59:52.750127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:04.677 [2024-11-26 23:59:52.750140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:04.677 [2024-11-26 23:59:52.750148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.677 [2024-11-26 23:59:52.750157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:04.677 [2024-11-26 23:59:52.750168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:26:04.677 [2024-11-26 23:59:52.750177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.752041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.677 [2024-11-26 23:59:52.752069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:04.677 [2024-11-26 23:59:52.752078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.845 ms 00:26:04.677 [2024-11-26 23:59:52.752089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.752185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.677 [2024-11-26 23:59:52.752199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:04.677 [2024-11-26 23:59:52.752208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:04.677 [2024-11-26 23:59:52.752218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.758783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.758842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:04.677 [2024-11-26 23:59:52.758851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.758861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.758913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.758926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:04.677 [2024-11-26 23:59:52.758936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.758945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.759009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.759023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:04.677 [2024-11-26 23:59:52.759031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.759040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.759056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 
[2024-11-26 23:59:52.759066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:04.677 [2024-11-26 23:59:52.759076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.759085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.771274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.771317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:04.677 [2024-11-26 23:59:52.771328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.771338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.781414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.781461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:04.677 [2024-11-26 23:59:52.781472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.781482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.781555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.781571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:04.677 [2024-11-26 23:59:52.781580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.781595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.781686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.781699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:04.677 [2024-11-26 23:59:52.781707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.781718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.781785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.781816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:04.677 [2024-11-26 23:59:52.781824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.781834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.781867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.781879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:04.677 [2024-11-26 23:59:52.781886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.781896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.781942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.781956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:04.677 [2024-11-26 23:59:52.781964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.781974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.677 [2024-11-26 23:59:52.782033] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:04.677 [2024-11-26 23:59:52.782046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:04.677 [2024-11-26 23:59:52.782055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:04.677 [2024-11-26 23:59:52.782069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.678 [2024-11-26 23:59:52.782216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.388 ms, result 0 00:26:04.678 true 00:26:04.939 23:59:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91142 00:26:04.939 23:59:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91142 00:26:04.939 23:59:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:04.939 [2024-11-26 23:59:52.873369] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:26:04.939 [2024-11-26 23:59:52.874231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91948 ] 00:26:04.939 [2024-11-26 23:59:53.022955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.939 [2024-11-26 23:59:53.047944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.324  [2024-11-26T23:59:55.396Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-26T23:59:56.332Z] Copying: 381/1024 [MB] (191 MBps) [2024-11-26T23:59:57.267Z] Copying: 639/1024 [MB] (257 MBps) [2024-11-26T23:59:57.836Z] Copying: 889/1024 [MB] (250 MBps) [2024-11-26T23:59:57.836Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:26:09.705 00:26:09.705 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91142 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:09.705 23:59:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:09.964 [2024-11-26 23:59:57.894453] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
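At this point the test has simulated a dirty shutdown: ftl0 was filled over NBD and unloaded, the spdk_tgt process was then killed outright (the "91142 Killed" message above), and a standalone spdk_dd is now rebuilding the bdev stack from the saved ftl.json and writing a second test file into ftl0 at an offset, which forces the blobstore recovery and dirty-state FTL startup traced below. A condensed sketch of that sequence, pieced together from the commands visible in this log rather than from dirty_shutdown.sh itself (the ftl.json redirect target is inferred from the --json path used later, and $spdk_tgt_pid stands in for the literal pid 91142):

# Condensed reconstruction of the traced flow; paths and counts copied from the log.
SPDK=/home/vagrant/spdk_repo/spdk

# Save the bdev subsystem config so ftl0 can be re-created without the target.
{
  echo '{"subsystems": ['
  "$SPDK/scripts/rpc.py" save_subsystem_config -n bdev
  echo ']}'
} > "$SPDK/test/ftl/config/ftl.json"

# Expose ftl0 over NBD and write 1 GiB (262144 x 4 KiB) of random data through it.
modprobe nbd
"$SPDK/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom --of="$SPDK/test/ftl/testfile" --bs=4096 --count=262144
md5sum "$SPDK/test/ftl/testfile"
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
sync /dev/nbd0
"$SPDK/scripts/rpc.py" nbd_stop_disk /dev/nbd0
"$SPDK/scripts/rpc.py" bdev_ftl_unload -b ftl0

# Kill the target, then let a standalone spdk_dd restore ftl0 from ftl.json
# and append a second random file at block offset 262144.
kill -9 "$spdk_tgt_pid"
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 --count=262144 --seek=262144 \
    --json="$SPDK/test/ftl/config/ftl.json"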
00:26:09.964 [2024-11-26 23:59:57.894594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92006 ] 00:26:09.964 [2024-11-26 23:59:58.036439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.964 [2024-11-26 23:59:58.059965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.222 [2024-11-26 23:59:58.160018] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.222 [2024-11-26 23:59:58.160079] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.222 [2024-11-26 23:59:58.222399] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:10.222 [2024-11-26 23:59:58.222673] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:10.222 [2024-11-26 23:59:58.222886] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:10.482 [2024-11-26 23:59:58.399369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.482 [2024-11-26 23:59:58.399487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:10.482 [2024-11-26 23:59:58.399504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.482 [2024-11-26 23:59:58.399520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.482 [2024-11-26 23:59:58.399578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.482 [2024-11-26 23:59:58.399590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.482 [2024-11-26 23:59:58.399598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:10.482 [2024-11-26 23:59:58.399604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.482 [2024-11-26 23:59:58.399623] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:10.482 [2024-11-26 23:59:58.399825] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:10.482 [2024-11-26 23:59:58.399838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.482 [2024-11-26 23:59:58.399844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.482 [2024-11-26 23:59:58.399860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:26:10.482 [2024-11-26 23:59:58.399868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.482 [2024-11-26 23:59:58.401134] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:10.482 [2024-11-26 23:59:58.403522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.482 [2024-11-26 23:59:58.403556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:10.482 [2024-11-26 23:59:58.403565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.389 ms 00:26:10.482 [2024-11-26 23:59:58.403571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.482 [2024-11-26 23:59:58.403617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.482 [2024-11-26 23:59:58.403628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:26:10.482 [2024-11-26 23:59:58.403635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:10.482 [2024-11-26 23:59:58.403642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.482 [2024-11-26 23:59:58.409817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.482 [2024-11-26 23:59:58.409930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.482 [2024-11-26 23:59:58.409942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:26:10.483 [2024-11-26 23:59:58.409949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.483 [2024-11-26 23:59:58.410031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.483 [2024-11-26 23:59:58.410039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.483 [2024-11-26 23:59:58.410045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:10.483 [2024-11-26 23:59:58.410059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.483 [2024-11-26 23:59:58.410092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.483 [2024-11-26 23:59:58.410102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:10.483 [2024-11-26 23:59:58.410109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:10.483 [2024-11-26 23:59:58.410114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.483 [2024-11-26 23:59:58.410131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:10.483 [2024-11-26 23:59:58.411654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.483 [2024-11-26 23:59:58.411679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.483 [2024-11-26 23:59:58.411687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.528 ms 00:26:10.483 [2024-11-26 23:59:58.411696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.483 [2024-11-26 23:59:58.411723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.483 [2024-11-26 23:59:58.411729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:10.483 [2024-11-26 23:59:58.411737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:10.483 [2024-11-26 23:59:58.411742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.483 [2024-11-26 23:59:58.411758] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:10.483 [2024-11-26 23:59:58.411774] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:10.483 [2024-11-26 23:59:58.411821] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:10.483 [2024-11-26 23:59:58.411841] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:10.483 [2024-11-26 23:59:58.411927] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:10.483 [2024-11-26 23:59:58.411935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:10.483 
[2024-11-26 23:59:58.411943] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:10.483 [2024-11-26 23:59:58.411951] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:10.483 [2024-11-26 23:59:58.411958] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:10.483 [2024-11-26 23:59:58.411965] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:10.483 [2024-11-26 23:59:58.411971] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:10.483 [2024-11-26 23:59:58.411980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:10.483 [2024-11-26 23:59:58.411987] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:10.483 [2024-11-26 23:59:58.411993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.483 [2024-11-26 23:59:58.411998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:10.483 [2024-11-26 23:59:58.412007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:26:10.483 [2024-11-26 23:59:58.412012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.483 [2024-11-26 23:59:58.412075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.483 [2024-11-26 23:59:58.412084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:10.483 [2024-11-26 23:59:58.412089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:10.483 [2024-11-26 23:59:58.412095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.483 [2024-11-26 23:59:58.412175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:10.483 [2024-11-26 23:59:58.412185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:10.483 [2024-11-26 23:59:58.412192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:10.483 [2024-11-26 23:59:58.412208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:10.483 [2024-11-26 23:59:58.412224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.483 [2024-11-26 23:59:58.412233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:10.483 [2024-11-26 23:59:58.412238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:10.483 [2024-11-26 23:59:58.412243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.483 [2024-11-26 23:59:58.412252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:10.483 [2024-11-26 23:59:58.412257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:10.483 [2024-11-26 23:59:58.412262] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:10.483 [2024-11-26 23:59:58.412272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:10.483 [2024-11-26 23:59:58.412289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:10.483 [2024-11-26 23:59:58.412306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:10.483 [2024-11-26 23:59:58.412323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:10.483 [2024-11-26 23:59:58.412346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:10.483 [2024-11-26 23:59:58.412363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.483 [2024-11-26 23:59:58.412375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:10.483 [2024-11-26 23:59:58.412381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:10.483 [2024-11-26 23:59:58.412387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.483 [2024-11-26 23:59:58.412393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:10.483 [2024-11-26 23:59:58.412399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:10.483 [2024-11-26 23:59:58.412404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:10.483 [2024-11-26 23:59:58.412416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:10.483 [2024-11-26 23:59:58.412422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.483 [2024-11-26 23:59:58.412428] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:10.483 [2024-11-26 23:59:58.412434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:10.483 [2024-11-26 23:59:58.412442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.483 [2024-11-26 
23:59:58.412455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:10.483 [2024-11-26 23:59:58.412461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:10.483 [2024-11-26 23:59:58.412467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:10.483 [2024-11-26 23:59:58.412473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:10.483 [2024-11-26 23:59:58.412481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:10.483 [2024-11-26 23:59:58.412487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:10.483 [2024-11-26 23:59:58.412495] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:10.483 [2024-11-26 23:59:58.412506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.483 [2024-11-26 23:59:58.412518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:10.483 [2024-11-26 23:59:58.412525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:10.483 [2024-11-26 23:59:58.412531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:10.483 [2024-11-26 23:59:58.412537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:10.483 [2024-11-26 23:59:58.412544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:10.483 [2024-11-26 23:59:58.412554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:10.483 [2024-11-26 23:59:58.412562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:10.483 [2024-11-26 23:59:58.412568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:10.484 [2024-11-26 23:59:58.412575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:10.484 [2024-11-26 23:59:58.412581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:10.484 [2024-11-26 23:59:58.412587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:10.484 [2024-11-26 23:59:58.412593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:10.484 [2024-11-26 23:59:58.412600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:10.484 [2024-11-26 23:59:58.412607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:10.484 [2024-11-26 23:59:58.412613] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:26:10.484 [2024-11-26 23:59:58.412622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.484 [2024-11-26 23:59:58.412629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:10.484 [2024-11-26 23:59:58.412636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:10.484 [2024-11-26 23:59:58.412642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:10.484 [2024-11-26 23:59:58.412648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:10.484 [2024-11-26 23:59:58.412655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.412666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:10.484 [2024-11-26 23:59:58.412676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:26:10.484 [2024-11-26 23:59:58.412682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.423646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.423674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.484 [2024-11-26 23:59:58.423685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.924 ms 00:26:10.484 [2024-11-26 23:59:58.423692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.423753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.423761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:10.484 [2024-11-26 23:59:58.423767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:10.484 [2024-11-26 23:59:58.423773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.442678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.442733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.484 [2024-11-26 23:59:58.442750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.851 ms 00:26:10.484 [2024-11-26 23:59:58.442762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.442839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.442856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.484 [2024-11-26 23:59:58.442869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.484 [2024-11-26 23:59:58.442880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.443384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.443413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.484 [2024-11-26 23:59:58.443433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:26:10.484 [2024-11-26 23:59:58.443446] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.443642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.443668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.484 [2024-11-26 23:59:58.443680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:26:10.484 [2024-11-26 23:59:58.443690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.451133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.451174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.484 [2024-11-26 23:59:58.451188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.412 ms 00:26:10.484 [2024-11-26 23:59:58.451199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.454396] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:10.484 [2024-11-26 23:59:58.454444] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:10.484 [2024-11-26 23:59:58.454461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.454473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:10.484 [2024-11-26 23:59:58.454485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.149 ms 00:26:10.484 [2024-11-26 23:59:58.454496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.465817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.465850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:10.484 [2024-11-26 23:59:58.465863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.274 ms 00:26:10.484 [2024-11-26 23:59:58.465869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.467413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.467438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:10.484 [2024-11-26 23:59:58.467444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:26:10.484 [2024-11-26 23:59:58.467450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.468704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.468731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:10.484 [2024-11-26 23:59:58.468738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:26:10.484 [2024-11-26 23:59:58.468744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.469021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.469033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:10.484 [2024-11-26 23:59:58.469040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:26:10.484 [2024-11-26 23:59:58.469046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 
[2024-11-26 23:59:58.485088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.485121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:10.484 [2024-11-26 23:59:58.485130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.029 ms 00:26:10.484 [2024-11-26 23:59:58.485137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.490998] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:10.484 [2024-11-26 23:59:58.493209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.493238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:10.484 [2024-11-26 23:59:58.493254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.039 ms 00:26:10.484 [2024-11-26 23:59:58.493261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.493309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.493320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:10.484 [2024-11-26 23:59:58.493331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:10.484 [2024-11-26 23:59:58.493338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.493426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.493434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:10.484 [2024-11-26 23:59:58.493440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:10.484 [2024-11-26 23:59:58.493446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.493466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.493473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:10.484 [2024-11-26 23:59:58.493480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:10.484 [2024-11-26 23:59:58.493488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.493517] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:10.484 [2024-11-26 23:59:58.493526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.493532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:10.484 [2024-11-26 23:59:58.493541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:10.484 [2024-11-26 23:59:58.493546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.496641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.496753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:10.484 [2024-11-26 23:59:58.496766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:26:10.484 [2024-11-26 23:59:58.496773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.496846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.484 [2024-11-26 23:59:58.496854] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:10.484 [2024-11-26 23:59:58.496861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:10.484 [2024-11-26 23:59:58.496868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.484 [2024-11-26 23:59:58.497781] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.047 ms, result 0 00:26:11.420  [2024-11-27T00:00:00.940Z] Copying: 50/1024 [MB] (50 MBps) [2024-11-27T00:00:01.512Z] Copying: 62/1024 [MB] (12 MBps) [2024-11-27T00:00:02.893Z] Copying: 73/1024 [MB] (10 MBps) [2024-11-27T00:00:03.834Z] Copying: 111/1024 [MB] (38 MBps) [2024-11-27T00:00:04.777Z] Copying: 131/1024 [MB] (19 MBps) [2024-11-27T00:00:05.716Z] Copying: 159/1024 [MB] (28 MBps) [2024-11-27T00:00:06.657Z] Copying: 186/1024 [MB] (26 MBps) [2024-11-27T00:00:07.599Z] Copying: 214/1024 [MB] (27 MBps) [2024-11-27T00:00:08.537Z] Copying: 236/1024 [MB] (21 MBps) [2024-11-27T00:00:09.929Z] Copying: 258/1024 [MB] (22 MBps) [2024-11-27T00:00:10.863Z] Copying: 283/1024 [MB] (24 MBps) [2024-11-27T00:00:11.804Z] Copying: 312/1024 [MB] (29 MBps) [2024-11-27T00:00:12.746Z] Copying: 329/1024 [MB] (17 MBps) [2024-11-27T00:00:13.683Z] Copying: 346/1024 [MB] (16 MBps) [2024-11-27T00:00:14.625Z] Copying: 373/1024 [MB] (27 MBps) [2024-11-27T00:00:15.568Z] Copying: 392/1024 [MB] (19 MBps) [2024-11-27T00:00:16.514Z] Copying: 409/1024 [MB] (17 MBps) [2024-11-27T00:00:17.902Z] Copying: 420/1024 [MB] (10 MBps) [2024-11-27T00:00:18.847Z] Copying: 430/1024 [MB] (10 MBps) [2024-11-27T00:00:19.790Z] Copying: 446/1024 [MB] (16 MBps) [2024-11-27T00:00:20.732Z] Copying: 462/1024 [MB] (15 MBps) [2024-11-27T00:00:21.677Z] Copying: 476/1024 [MB] (14 MBps) [2024-11-27T00:00:22.623Z] Copying: 490/1024 [MB] (14 MBps) [2024-11-27T00:00:23.561Z] Copying: 509/1024 [MB] (18 MBps) [2024-11-27T00:00:24.951Z] Copying: 532/1024 [MB] (23 MBps) [2024-11-27T00:00:25.525Z] Copying: 555680/1048576 [kB] (10068 kBps) [2024-11-27T00:00:26.914Z] Copying: 565916/1048576 [kB] (10236 kBps) [2024-11-27T00:00:27.859Z] Copying: 576044/1048576 [kB] (10128 kBps) [2024-11-27T00:00:28.806Z] Copying: 577/1024 [MB] (15 MBps) [2024-11-27T00:00:29.750Z] Copying: 591/1024 [MB] (14 MBps) [2024-11-27T00:00:30.693Z] Copying: 605/1024 [MB] (13 MBps) [2024-11-27T00:00:31.636Z] Copying: 617/1024 [MB] (12 MBps) [2024-11-27T00:00:32.581Z] Copying: 633/1024 [MB] (15 MBps) [2024-11-27T00:00:33.525Z] Copying: 651/1024 [MB] (18 MBps) [2024-11-27T00:00:34.914Z] Copying: 663/1024 [MB] (12 MBps) [2024-11-27T00:00:35.860Z] Copying: 689944/1048576 [kB] (10124 kBps) [2024-11-27T00:00:36.807Z] Copying: 683/1024 [MB] (10 MBps) [2024-11-27T00:00:37.753Z] Copying: 693/1024 [MB] (10 MBps) [2024-11-27T00:00:38.773Z] Copying: 720740/1048576 [kB] (10176 kBps) [2024-11-27T00:00:39.738Z] Copying: 723/1024 [MB] (19 MBps) [2024-11-27T00:00:40.684Z] Copying: 742/1024 [MB] (19 MBps) [2024-11-27T00:00:41.628Z] Copying: 762/1024 [MB] (19 MBps) [2024-11-27T00:00:42.575Z] Copying: 773/1024 [MB] (10 MBps) [2024-11-27T00:00:43.520Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-27T00:00:44.910Z] Copying: 798/1024 [MB] (13 MBps) [2024-11-27T00:00:45.856Z] Copying: 815/1024 [MB] (16 MBps) [2024-11-27T00:00:46.801Z] Copying: 833/1024 [MB] (17 MBps) [2024-11-27T00:00:47.748Z] Copying: 846/1024 [MB] (13 MBps) [2024-11-27T00:00:48.693Z] Copying: 864/1024 [MB] (18 MBps) [2024-11-27T00:00:49.637Z] Copying: 885/1024 [MB] (21 MBps) 
[2024-11-27T00:00:50.576Z] Copying: 900/1024 [MB] (15 MBps) [2024-11-27T00:00:51.515Z] Copying: 914/1024 [MB] (13 MBps) [2024-11-27T00:00:52.899Z] Copying: 941/1024 [MB] (27 MBps) [2024-11-27T00:00:53.844Z] Copying: 968/1024 [MB] (26 MBps) [2024-11-27T00:00:54.789Z] Copying: 980/1024 [MB] (12 MBps) [2024-11-27T00:00:55.733Z] Copying: 991/1024 [MB] (10 MBps) [2024-11-27T00:00:56.676Z] Copying: 1004/1024 [MB] (13 MBps) [2024-11-27T00:00:57.251Z] Copying: 1023/1024 [MB] (18 MBps) [2024-11-27T00:00:57.251Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-27 00:00:56.988044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:56.988139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:09.120 [2024-11-27 00:00:56.988159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:09.120 [2024-11-27 00:00:56.988170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:56.988705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:09.120 [2024-11-27 00:00:56.990203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:56.990509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:09.120 [2024-11-27 00:00:56.990536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:27:09.120 [2024-11-27 00:00:56.990546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.005616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.005802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:09.120 [2024-11-27 00:00:57.005826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.287 ms 00:27:09.120 [2024-11-27 00:00:57.005836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.028660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.028874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:09.120 [2024-11-27 00:00:57.028897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.794 ms 00:27:09.120 [2024-11-27 00:00:57.028907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.035095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.035138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:09.120 [2024-11-27 00:00:57.035150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.148 ms 00:27:09.120 [2024-11-27 00:00:57.035159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.038234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.038285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:09.120 [2024-11-27 00:00:57.038297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:27:09.120 [2024-11-27 00:00:57.038305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.044597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.044766] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:09.120 [2024-11-27 00:00:57.044787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.248 ms 00:27:09.120 [2024-11-27 00:00:57.044810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.203599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.203775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:09.120 [2024-11-27 00:00:57.203815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 158.722 ms 00:27:09.120 [2024-11-27 00:00:57.203826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.207593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.207756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:09.120 [2024-11-27 00:00:57.207773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.745 ms 00:27:09.120 [2024-11-27 00:00:57.207781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.211294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.211345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:09.120 [2024-11-27 00:00:57.211356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.437 ms 00:27:09.120 [2024-11-27 00:00:57.211363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.213705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.213752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:09.120 [2024-11-27 00:00:57.213763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:27:09.120 [2024-11-27 00:00:57.213771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.216072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.120 [2024-11-27 00:00:57.216119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:09.120 [2024-11-27 00:00:57.216129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:27:09.120 [2024-11-27 00:00:57.216136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-27 00:00:57.216177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:09.120 [2024-11-27 00:00:57.216206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 58112 / 261120 wr_cnt: 1 state: open 00:27:09.120 [2024-11-27 00:00:57.216218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 
00:00:57.216261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:27:09.120 [2024-11-27 00:00:57.216459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:09.120 [2024-11-27 00:00:57.216548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.216999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.217006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.217015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.217023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.217031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:09.121 [2024-11-27 00:00:57.217048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:09.121 [2024-11-27 00:00:57.217061] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 549601c2-4d66-4189-bcb6-0dad7a61c53b 00:27:09.121 [2024-11-27 00:00:57.217074] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 58112 00:27:09.121 [2024-11-27 00:00:57.217083] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 59072 00:27:09.121 [2024-11-27 00:00:57.217092] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 58112 00:27:09.121 [2024-11-27 00:00:57.217101] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0165 00:27:09.121 [2024-11-27 00:00:57.217108] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:09.121 [2024-11-27 00:00:57.217117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:09.121 [2024-11-27 00:00:57.217129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:09.121 [2024-11-27 00:00:57.217135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:09.121 [2024-11-27 00:00:57.217141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:09.121 [2024-11-27 00:00:57.217149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.121 [2024-11-27 00:00:57.217157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:09.121 [2024-11-27 00:00:57.217172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:27:09.121 [2024-11-27 00:00:57.217179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.121 [2024-11-27 00:00:57.220515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.121 [2024-11-27 00:00:57.220549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:09.121 [2024-11-27 00:00:57.220561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.317 ms 00:27:09.121 [2024-11-27 00:00:57.220571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.121 [2024-11-27 00:00:57.220728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.121 [2024-11-27 00:00:57.220739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:09.121 [2024-11-27 00:00:57.220748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:27:09.121 [2024-11-27 00:00:57.220757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.121 [2024-11-27 00:00:57.230919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.121 [2024-11-27 00:00:57.231080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.121 [2024-11-27 00:00:57.231138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.121 [2024-11-27 00:00:57.231162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.121 [2024-11-27 00:00:57.231243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.121 [2024-11-27 00:00:57.231265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.121 [2024-11-27 00:00:57.231286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.121 [2024-11-27 00:00:57.231306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.121 [2024-11-27 00:00:57.231388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.121 [2024-11-27 00:00:57.231507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.121 [2024-11-27 00:00:57.231529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.121 [2024-11-27 00:00:57.231548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.121 [2024-11-27 00:00:57.231577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.121 [2024-11-27 00:00:57.231603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.121 [2024-11-27 00:00:57.231623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:09.121 [2024-11-27 00:00:57.231690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.383 [2024-11-27 00:00:57.251163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.383 [2024-11-27 00:00:57.251378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.383 [2024-11-27 00:00:57.251439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.383 [2024-11-27 00:00:57.251464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.383 [2024-11-27 00:00:57.266698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.383 [2024-11-27 00:00:57.266933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.383 [2024-11-27 00:00:57.267001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.383 [2024-11-27 00:00:57.267029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.383 [2024-11-27 00:00:57.267118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.383 [2024-11-27 00:00:57.267143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:09.383 [2024-11-27 00:00:57.267165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.383 [2024-11-27 00:00:57.267186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.383 [2024-11-27 00:00:57.267237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.383 [2024-11-27 00:00:57.267260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:09.383 [2024-11-27 00:00:57.267289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.383 [2024-11-27 00:00:57.267344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.383 [2024-11-27 00:00:57.267457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.383 [2024-11-27 00:00:57.267488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:09.384 [2024-11-27 00:00:57.267608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.384 [2024-11-27 00:00:57.267633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.384 [2024-11-27 00:00:57.267701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.384 [2024-11-27 00:00:57.267777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:09.384 [2024-11-27 00:00:57.267833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.384 [2024-11-27 00:00:57.267849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.384 [2024-11-27 00:00:57.267908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.384 [2024-11-27 00:00:57.267926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:09.384 [2024-11-27 00:00:57.267936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.384 [2024-11-27 00:00:57.267955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.384 [2024-11-27 00:00:57.268013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.384 [2024-11-27 00:00:57.268024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:09.384 [2024-11-27 00:00:57.268038] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.384 [2024-11-27 00:00:57.268047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.384 [2024-11-27 00:00:57.268215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.616 ms, result 0 00:27:09.954 00:27:09.954 00:27:09.954 00:00:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:12.503 00:01:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:12.504 [2024-11-27 00:01:00.286235] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:27:12.504 [2024-11-27 00:01:00.286386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92640 ] 00:27:12.504 [2024-11-27 00:01:00.431700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.504 [2024-11-27 00:01:00.471538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.504 [2024-11-27 00:01:00.621421] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:12.504 [2024-11-27 00:01:00.621847] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:12.786 [2024-11-27 00:01:00.785753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.787 [2024-11-27 00:01:00.785840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:12.787 [2024-11-27 00:01:00.785859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:12.787 [2024-11-27 00:01:00.785869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.787 [2024-11-27 00:01:00.785938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.787 [2024-11-27 00:01:00.785951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:12.787 [2024-11-27 00:01:00.785965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:12.787 [2024-11-27 00:01:00.785980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.787 [2024-11-27 00:01:00.786040] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:12.787 [2024-11-27 00:01:00.786840] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:12.787 [2024-11-27 00:01:00.787023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.787 [2024-11-27 00:01:00.787039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:12.787 [2024-11-27 00:01:00.787058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:27:12.787 [2024-11-27 00:01:00.787068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.787 [2024-11-27 00:01:00.789341] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:12.787 [2024-11-27 00:01:00.793901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:12.787 [2024-11-27 00:01:00.794096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:12.787 [2024-11-27 00:01:00.794116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.563 ms 00:27:12.787 [2024-11-27 00:01:00.794131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.788 [2024-11-27 00:01:00.794201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.788 [2024-11-27 00:01:00.794211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:12.788 [2024-11-27 00:01:00.794221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:12.788 [2024-11-27 00:01:00.794235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.788 [2024-11-27 00:01:00.805615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.788 [2024-11-27 00:01:00.805789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:12.788 [2024-11-27 00:01:00.805831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.325 ms 00:27:12.788 [2024-11-27 00:01:00.805843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.788 [2024-11-27 00:01:00.805960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.788 [2024-11-27 00:01:00.805971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:12.788 [2024-11-27 00:01:00.805980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:12.788 [2024-11-27 00:01:00.805993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.788 [2024-11-27 00:01:00.806064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.788 [2024-11-27 00:01:00.806076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:12.788 [2024-11-27 00:01:00.806085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:12.788 [2024-11-27 00:01:00.806096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.788 [2024-11-27 00:01:00.806129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:12.788 [2024-11-27 00:01:00.808779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.788 [2024-11-27 00:01:00.808831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:12.788 [2024-11-27 00:01:00.808842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:27:12.788 [2024-11-27 00:01:00.808850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.788 [2024-11-27 00:01:00.808892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.789 [2024-11-27 00:01:00.808901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:12.789 [2024-11-27 00:01:00.808915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:12.789 [2024-11-27 00:01:00.808925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.789 [2024-11-27 00:01:00.808949] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:12.789 [2024-11-27 00:01:00.808974] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:12.789 [2024-11-27 00:01:00.809020] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:12.789 [2024-11-27 00:01:00.809041] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:12.789 [2024-11-27 00:01:00.809153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:12.789 [2024-11-27 00:01:00.809165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:12.789 [2024-11-27 00:01:00.809179] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:12.789 [2024-11-27 00:01:00.809191] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:12.789 [2024-11-27 00:01:00.809204] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:12.789 [2024-11-27 00:01:00.809214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:12.789 [2024-11-27 00:01:00.809222] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:12.789 [2024-11-27 00:01:00.809230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:12.789 [2024-11-27 00:01:00.809241] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:12.789 [2024-11-27 00:01:00.809254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.789 [2024-11-27 00:01:00.809266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:12.792 [2024-11-27 00:01:00.809278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:27:12.792 [2024-11-27 00:01:00.809286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.792 [2024-11-27 00:01:00.809371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.792 [2024-11-27 00:01:00.809380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:12.793 [2024-11-27 00:01:00.809387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:12.793 [2024-11-27 00:01:00.809395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.793 [2024-11-27 00:01:00.809503] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:12.793 [2024-11-27 00:01:00.809516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:12.793 [2024-11-27 00:01:00.809526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:12.793 [2024-11-27 00:01:00.809537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.793 [2024-11-27 00:01:00.809546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:12.793 [2024-11-27 00:01:00.809555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:12.793 [2024-11-27 00:01:00.809566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:12.793 [2024-11-27 00:01:00.809576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:12.793 [2024-11-27 00:01:00.809586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:12.793 [2024-11-27 00:01:00.809594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:12.793 [2024-11-27 00:01:00.809603] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:12.793 [2024-11-27 00:01:00.809610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:12.793 [2024-11-27 00:01:00.809618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:12.793 [2024-11-27 00:01:00.809627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:12.793 [2024-11-27 00:01:00.809636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:12.793 [2024-11-27 00:01:00.809644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.793 [2024-11-27 00:01:00.809655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:12.793 [2024-11-27 00:01:00.809665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:12.794 [2024-11-27 00:01:00.809674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.794 [2024-11-27 00:01:00.809683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:12.794 [2024-11-27 00:01:00.809691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:12.794 [2024-11-27 00:01:00.809699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.794 [2024-11-27 00:01:00.809709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:12.794 [2024-11-27 00:01:00.809718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:12.794 [2024-11-27 00:01:00.809726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.794 [2024-11-27 00:01:00.809734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:12.794 [2024-11-27 00:01:00.809742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:12.794 [2024-11-27 00:01:00.809751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.794 [2024-11-27 00:01:00.809759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:12.794 [2024-11-27 00:01:00.809767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:12.794 [2024-11-27 00:01:00.809775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.794 [2024-11-27 00:01:00.809783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:12.794 [2024-11-27 00:01:00.809983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:12.794 [2024-11-27 00:01:00.810021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:12.794 [2024-11-27 00:01:00.810056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:12.794 [2024-11-27 00:01:00.810074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:12.794 [2024-11-27 00:01:00.810093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:12.794 [2024-11-27 00:01:00.810111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:12.794 [2024-11-27 00:01:00.810136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:12.794 [2024-11-27 00:01:00.810156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.794 [2024-11-27 00:01:00.810175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:12.794 [2024-11-27 00:01:00.810193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:12.794 
[2024-11-27 00:01:00.810211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.794 [2024-11-27 00:01:00.810229] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:12.794 [2024-11-27 00:01:00.810252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:12.794 [2024-11-27 00:01:00.810271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:12.794 [2024-11-27 00:01:00.810347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.794 [2024-11-27 00:01:00.810372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:12.794 [2024-11-27 00:01:00.810395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:12.794 [2024-11-27 00:01:00.810415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:12.794 [2024-11-27 00:01:00.810434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:12.794 [2024-11-27 00:01:00.810453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:12.795 [2024-11-27 00:01:00.810472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:12.795 [2024-11-27 00:01:00.810493] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:12.795 [2024-11-27 00:01:00.810565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.795 [2024-11-27 00:01:00.810599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:12.795 [2024-11-27 00:01:00.810628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:12.795 [2024-11-27 00:01:00.810685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:12.795 [2024-11-27 00:01:00.810716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:12.795 [2024-11-27 00:01:00.810746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:12.795 [2024-11-27 00:01:00.810924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:12.795 [2024-11-27 00:01:00.810973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:12.795 [2024-11-27 00:01:00.811003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:12.795 [2024-11-27 00:01:00.811031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:12.795 [2024-11-27 00:01:00.811073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:12.795 [2024-11-27 00:01:00.811144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:12.795 [2024-11-27 00:01:00.811177] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:12.795 [2024-11-27 00:01:00.811206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:12.795 [2024-11-27 00:01:00.811235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:12.795 [2024-11-27 00:01:00.811263] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:12.795 [2024-11-27 00:01:00.811301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.795 [2024-11-27 00:01:00.811362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:12.795 [2024-11-27 00:01:00.811373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:12.795 [2024-11-27 00:01:00.811381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:12.796 [2024-11-27 00:01:00.811389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:12.796 [2024-11-27 00:01:00.811400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.796 [2024-11-27 00:01:00.811409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:12.796 [2024-11-27 00:01:00.811419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.966 ms 00:27:12.796 [2024-11-27 00:01:00.811431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.796 [2024-11-27 00:01:00.831361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.796 [2024-11-27 00:01:00.831413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:12.796 [2024-11-27 00:01:00.831426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.848 ms 00:27:12.796 [2024-11-27 00:01:00.831435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.796 [2024-11-27 00:01:00.831531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.796 [2024-11-27 00:01:00.831541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:12.796 [2024-11-27 00:01:00.831550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:12.796 [2024-11-27 00:01:00.831558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.796 [2024-11-27 00:01:00.857037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.796 [2024-11-27 00:01:00.857106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:12.796 [2024-11-27 00:01:00.857125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.416 ms 00:27:12.796 [2024-11-27 00:01:00.857146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.796 [2024-11-27 00:01:00.857209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.796 [2024-11-27 00:01:00.857225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:12.796 
[2024-11-27 00:01:00.857238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:12.796 [2024-11-27 00:01:00.857250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.796 [2024-11-27 00:01:00.858082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.796 [2024-11-27 00:01:00.858133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:12.796 [2024-11-27 00:01:00.858150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:27:12.796 [2024-11-27 00:01:00.858175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.796 [2024-11-27 00:01:00.858396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.796 [2024-11-27 00:01:00.858411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:12.797 [2024-11-27 00:01:00.858423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:27:12.797 [2024-11-27 00:01:00.858434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.797 [2024-11-27 00:01:00.869639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.797 [2024-11-27 00:01:00.869686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:12.797 [2024-11-27 00:01:00.869706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.176 ms 00:27:12.797 [2024-11-27 00:01:00.869715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.797 [2024-11-27 00:01:00.874301] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:12.797 [2024-11-27 00:01:00.874357] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:12.797 [2024-11-27 00:01:00.874375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.797 [2024-11-27 00:01:00.874384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:12.797 [2024-11-27 00:01:00.874393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:27:12.797 [2024-11-27 00:01:00.874401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.797 [2024-11-27 00:01:00.890305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.797 [2024-11-27 00:01:00.890490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:12.797 [2024-11-27 00:01:00.890511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.848 ms 00:27:12.797 [2024-11-27 00:01:00.890520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.797 [2024-11-27 00:01:00.893444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.797 [2024-11-27 00:01:00.893491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:12.797 [2024-11-27 00:01:00.893502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:27:12.797 [2024-11-27 00:01:00.893510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.797 [2024-11-27 00:01:00.896314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.797 [2024-11-27 00:01:00.896467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:12.797 [2024-11-27 00:01:00.896527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.750 ms 00:27:12.797 [2024-11-27 00:01:00.896549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.797 [2024-11-27 00:01:00.896930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.797 [2024-11-27 00:01:00.896975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:12.798 [2024-11-27 00:01:00.897049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:27:12.798 [2024-11-27 00:01:00.897068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.062 [2024-11-27 00:01:00.934163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.062 [2024-11-27 00:01:00.934437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:13.062 [2024-11-27 00:01:00.934505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.058 ms 00:27:13.062 [2024-11-27 00:01:00.934544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.062 [2024-11-27 00:01:00.943030] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:13.062 [2024-11-27 00:01:00.946888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.062 [2024-11-27 00:01:00.947038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:13.062 [2024-11-27 00:01:00.947110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.285 ms 00:27:13.062 [2024-11-27 00:01:00.947135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.062 [2024-11-27 00:01:00.947249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.062 [2024-11-27 00:01:00.947279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:13.062 [2024-11-27 00:01:00.947301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:13.062 [2024-11-27 00:01:00.947332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.062 [2024-11-27 00:01:00.949134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.062 [2024-11-27 00:01:00.949292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:13.062 [2024-11-27 00:01:00.949312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:27:13.062 [2024-11-27 00:01:00.949321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.062 [2024-11-27 00:01:00.949375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.062 [2024-11-27 00:01:00.949385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:13.062 [2024-11-27 00:01:00.949395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:13.062 [2024-11-27 00:01:00.949404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.062 [2024-11-27 00:01:00.949450] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:13.062 [2024-11-27 00:01:00.949463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.062 [2024-11-27 00:01:00.949472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:13.063 [2024-11-27 00:01:00.949486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:13.063 [2024-11-27 00:01:00.949496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:13.063 [2024-11-27 00:01:00.956181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.063 [2024-11-27 00:01:00.956349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:13.063 [2024-11-27 00:01:00.956381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.663 ms 00:27:13.063 [2024-11-27 00:01:00.956391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.063 [2024-11-27 00:01:00.956479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.063 [2024-11-27 00:01:00.956491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:13.063 [2024-11-27 00:01:00.956501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:13.063 [2024-11-27 00:01:00.956514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.063 [2024-11-27 00:01:00.958080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 171.680 ms, result 0 00:27:14.447  [2024-11-27T00:01:03.150Z] Copying: 1540/1048576 [kB] (1540 kBps) [2024-11-27T00:01:04.536Z] Copying: 3196/1048576 [kB] (1656 kBps) [2024-11-27T00:01:05.481Z] Copying: 8028/1048576 [kB] (4832 kBps) [2024-11-27T00:01:06.425Z] Copying: 24/1024 [MB] (16 MBps) [2024-11-27T00:01:07.369Z] Copying: 39/1024 [MB] (15 MBps) [2024-11-27T00:01:08.314Z] Copying: 55/1024 [MB] (16 MBps) [2024-11-27T00:01:09.261Z] Copying: 71/1024 [MB] (16 MBps) [2024-11-27T00:01:10.205Z] Copying: 87/1024 [MB] (15 MBps) [2024-11-27T00:01:11.149Z] Copying: 103/1024 [MB] (15 MBps) [2024-11-27T00:01:12.541Z] Copying: 121/1024 [MB] (17 MBps) [2024-11-27T00:01:13.488Z] Copying: 146/1024 [MB] (24 MBps) [2024-11-27T00:01:14.153Z] Copying: 176/1024 [MB] (30 MBps) [2024-11-27T00:01:15.536Z] Copying: 198/1024 [MB] (22 MBps) [2024-11-27T00:01:16.479Z] Copying: 229/1024 [MB] (30 MBps) [2024-11-27T00:01:17.424Z] Copying: 273/1024 [MB] (43 MBps) [2024-11-27T00:01:18.370Z] Copying: 297/1024 [MB] (24 MBps) [2024-11-27T00:01:19.313Z] Copying: 320/1024 [MB] (22 MBps) [2024-11-27T00:01:20.258Z] Copying: 352/1024 [MB] (32 MBps) [2024-11-27T00:01:21.203Z] Copying: 373/1024 [MB] (20 MBps) [2024-11-27T00:01:22.150Z] Copying: 399/1024 [MB] (26 MBps) [2024-11-27T00:01:23.538Z] Copying: 426/1024 [MB] (27 MBps) [2024-11-27T00:01:24.483Z] Copying: 457/1024 [MB] (30 MBps) [2024-11-27T00:01:25.428Z] Copying: 481/1024 [MB] (23 MBps) [2024-11-27T00:01:26.373Z] Copying: 508/1024 [MB] (27 MBps) [2024-11-27T00:01:27.307Z] Copying: 538/1024 [MB] (29 MBps) [2024-11-27T00:01:28.249Z] Copying: 583/1024 [MB] (44 MBps) [2024-11-27T00:01:29.195Z] Copying: 618/1024 [MB] (34 MBps) [2024-11-27T00:01:30.579Z] Copying: 647/1024 [MB] (29 MBps) [2024-11-27T00:01:31.152Z] Copying: 683/1024 [MB] (36 MBps) [2024-11-27T00:01:32.534Z] Copying: 713/1024 [MB] (30 MBps) [2024-11-27T00:01:33.477Z] Copying: 742/1024 [MB] (29 MBps) [2024-11-27T00:01:34.413Z] Copying: 773/1024 [MB] (30 MBps) [2024-11-27T00:01:35.358Z] Copying: 809/1024 [MB] (36 MBps) [2024-11-27T00:01:36.303Z] Copying: 839/1024 [MB] (30 MBps) [2024-11-27T00:01:37.246Z] Copying: 867/1024 [MB] (28 MBps) [2024-11-27T00:01:38.188Z] Copying: 890/1024 [MB] (22 MBps) [2024-11-27T00:01:39.573Z] Copying: 924/1024 [MB] (34 MBps) [2024-11-27T00:01:40.516Z] Copying: 956/1024 [MB] (31 MBps) [2024-11-27T00:01:41.457Z] Copying: 977/1024 [MB] (20 MBps) [2024-11-27T00:01:42.435Z] Copying: 1004/1024 [MB] (27 MBps) 
[2024-11-27T00:01:42.435Z] Copying: 1020/1024 [MB] (15 MBps) [2024-11-27T00:01:43.007Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-27 00:01:42.834455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.876 [2024-11-27 00:01:42.834574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:54.877 [2024-11-27 00:01:42.834599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:54.877 [2024-11-27 00:01:42.834612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.834650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:54.877 [2024-11-27 00:01:42.835853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.835893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:54.877 [2024-11-27 00:01:42.835910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:27:54.877 [2024-11-27 00:01:42.835923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.836281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.836410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:54.877 [2024-11-27 00:01:42.836423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:27:54.877 [2024-11-27 00:01:42.836435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.853759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.853823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:54.877 [2024-11-27 00:01:42.853848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.300 ms 00:27:54.877 [2024-11-27 00:01:42.853857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.860164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.860200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:54.877 [2024-11-27 00:01:42.860226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.274 ms 00:27:54.877 [2024-11-27 00:01:42.860236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.863567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.863769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:54.877 [2024-11-27 00:01:42.863813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:27:54.877 [2024-11-27 00:01:42.863824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.868400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.868568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:54.877 [2024-11-27 00:01:42.868633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:27:54.877 [2024-11-27 00:01:42.868658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.873681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 
00:01:42.873858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:54.877 [2024-11-27 00:01:42.873986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.968 ms 00:27:54.877 [2024-11-27 00:01:42.874018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.877096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.877255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:54.877 [2024-11-27 00:01:42.877316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:27:54.877 [2024-11-27 00:01:42.877340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.880328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.880492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:54.877 [2024-11-27 00:01:42.880547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:27:54.877 [2024-11-27 00:01:42.880569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.883152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.883301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:54.877 [2024-11-27 00:01:42.883356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:27:54.877 [2024-11-27 00:01:42.883378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.885596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.877 [2024-11-27 00:01:42.885746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:54.877 [2024-11-27 00:01:42.885829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:27:54.877 [2024-11-27 00:01:42.885854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.877 [2024-11-27 00:01:42.885931] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:54.877 [2024-11-27 00:01:42.885968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:54.877 [2024-11-27 00:01:42.886034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:54.877 [2024-11-27 00:01:42.886129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.886189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.886222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.886278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.886311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.886341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.886536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 
[2024-11-27 00:01:42.887103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.887918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.888876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 
state: free 00:27:54.877 [2024-11-27 00:01:42.889204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:54.877 [2024-11-27 00:01:42.889330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 
0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.889616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:54.878 [2024-11-27 00:01:42.890868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:54.878 [2024-11-27 00:01:42.890888] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 549601c2-4d66-4189-bcb6-0dad7a61c53b 00:27:54.878 [2024-11-27 00:01:42.890903] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:54.878 [2024-11-27 00:01:42.890912] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 206528 00:27:54.878 [2024-11-27 00:01:42.890920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 204544 00:27:54.878 [2024-11-27 00:01:42.890930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0097 00:27:54.878 [2024-11-27 00:01:42.890940] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:54.878 [2024-11-27 00:01:42.890949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:54.878 [2024-11-27 00:01:42.890958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:54.878 [2024-11-27 00:01:42.890965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:54.878 [2024-11-27 00:01:42.890973] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:54.878 [2024-11-27 00:01:42.890984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.878 [2024-11-27 00:01:42.890999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:54.878 [2024-11-27 00:01:42.891009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.055 ms 00:27:54.878 [2024-11-27 00:01:42.891018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.878 [2024-11-27 00:01:42.894258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.878 [2024-11-27 00:01:42.894293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:54.878 [2024-11-27 00:01:42.894306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms 00:27:54.878 [2024-11-27 00:01:42.894316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.878 [2024-11-27 00:01:42.894492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.878 [2024-11-27 00:01:42.894512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:54.878 [2024-11-27 00:01:42.894522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:27:54.878 [2024-11-27 00:01:42.894531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.878 [2024-11-27 00:01:42.904781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.878 [2024-11-27 00:01:42.904866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:54.878 [2024-11-27 00:01:42.904877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.878 [2024-11-27 00:01:42.904888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.878 [2024-11-27 00:01:42.904968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.878 [2024-11-27 00:01:42.904984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:54.878 [2024-11-27 00:01:42.905007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.878 [2024-11-27 00:01:42.905017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.878 [2024-11-27 00:01:42.905091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.878 [2024-11-27 00:01:42.905103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:54.878 [2024-11-27 00:01:42.905112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.878 [2024-11-27 00:01:42.905122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.878 [2024-11-27 00:01:42.905140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.878 [2024-11-27 00:01:42.905150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:54.878 [2024-11-27 00:01:42.905164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.878 [2024-11-27 00:01:42.905175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.878 [2024-11-27 00:01:42.925589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.878 [2024-11-27 00:01:42.925841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:54.878 [2024-11-27 00:01:42.925907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:54.878 [2024-11-27 00:01:42.925932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.878 [2024-11-27 00:01:42.941737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.878 [2024-11-27 00:01:42.941983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:54.878 [2024-11-27 00:01:42.942075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.879 [2024-11-27 00:01:42.942100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.879 [2024-11-27 00:01:42.942184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.879 [2024-11-27 00:01:42.942209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:54.879 [2024-11-27 00:01:42.942230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.879 [2024-11-27 00:01:42.942251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.879 [2024-11-27 00:01:42.942343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.879 [2024-11-27 00:01:42.942369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:54.879 [2024-11-27 00:01:42.942454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.879 [2024-11-27 00:01:42.942478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.879 [2024-11-27 00:01:42.942601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.879 [2024-11-27 00:01:42.942628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:54.879 [2024-11-27 00:01:42.942649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.879 [2024-11-27 00:01:42.942670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.879 [2024-11-27 00:01:42.942717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.879 [2024-11-27 00:01:42.942843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:54.879 [2024-11-27 00:01:42.942879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.879 [2024-11-27 00:01:42.942900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.879 [2024-11-27 00:01:42.942969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.879 [2024-11-27 00:01:42.942992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:54.879 [2024-11-27 00:01:42.943013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.879 [2024-11-27 00:01:42.943086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.879 [2024-11-27 00:01:42.943152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.879 [2024-11-27 00:01:42.943164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:54.879 [2024-11-27 00:01:42.943174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.879 [2024-11-27 00:01:42.943183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.879 [2024-11-27 00:01:42.943368] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 108.877 ms, result 0 00:27:55.139 00:27:55.139 00:27:55.139 00:01:43 ftl.ftl_dirty_shutdown 
-- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:57.748 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:57.748 00:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:57.748 [2024-11-27 00:01:45.635585] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:27:57.748 [2024-11-27 00:01:45.636005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93105 ] 00:27:57.748 [2024-11-27 00:01:45.782328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.748 [2024-11-27 00:01:45.824138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.010 [2024-11-27 00:01:45.977881] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:58.010 [2024-11-27 00:01:45.978232] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:58.273 [2024-11-27 00:01:46.141583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.141839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:58.273 [2024-11-27 00:01:46.141876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:58.273 [2024-11-27 00:01:46.141886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.141974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.141987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:58.273 [2024-11-27 00:01:46.141998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:58.273 [2024-11-27 00:01:46.142014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.142060] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:58.273 [2024-11-27 00:01:46.142494] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:58.273 [2024-11-27 00:01:46.142529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.142540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:58.273 [2024-11-27 00:01:46.142555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:27:58.273 [2024-11-27 00:01:46.142565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.144883] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:58.273 [2024-11-27 00:01:46.150035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.150114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:58.273 [2024-11-27 00:01:46.150126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.154 ms 00:27:58.273 [2024-11-27 00:01:46.150142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:58.273 [2024-11-27 00:01:46.150226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.150241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:58.273 [2024-11-27 00:01:46.150251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:58.273 [2024-11-27 00:01:46.150262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.162027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.162086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:58.273 [2024-11-27 00:01:46.162106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.716 ms 00:27:58.273 [2024-11-27 00:01:46.162127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.162230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.162240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:58.273 [2024-11-27 00:01:46.162252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:58.273 [2024-11-27 00:01:46.162261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.162327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.162339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:58.273 [2024-11-27 00:01:46.162348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:58.273 [2024-11-27 00:01:46.162359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.162385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:58.273 [2024-11-27 00:01:46.165116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.165158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:58.273 [2024-11-27 00:01:46.165170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:27:58.273 [2024-11-27 00:01:46.165179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.165218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.165227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:58.273 [2024-11-27 00:01:46.165240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:58.273 [2024-11-27 00:01:46.165251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.165276] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:58.273 [2024-11-27 00:01:46.165307] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:58.273 [2024-11-27 00:01:46.165356] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:58.273 [2024-11-27 00:01:46.165374] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:58.273 [2024-11-27 00:01:46.165488] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:58.273 [2024-11-27 00:01:46.165500] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:58.273 [2024-11-27 00:01:46.165515] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:58.273 [2024-11-27 00:01:46.165526] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:58.273 [2024-11-27 00:01:46.165535] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:58.273 [2024-11-27 00:01:46.165545] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:58.273 [2024-11-27 00:01:46.165552] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:58.273 [2024-11-27 00:01:46.165561] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:58.273 [2024-11-27 00:01:46.165573] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:58.273 [2024-11-27 00:01:46.165582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.165593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:58.273 [2024-11-27 00:01:46.165602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:27:58.273 [2024-11-27 00:01:46.165609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.165696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.273 [2024-11-27 00:01:46.165709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:58.273 [2024-11-27 00:01:46.165717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:58.273 [2024-11-27 00:01:46.165724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.273 [2024-11-27 00:01:46.165856] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:58.273 [2024-11-27 00:01:46.165870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:58.273 [2024-11-27 00:01:46.165880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:58.273 [2024-11-27 00:01:46.165895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.273 [2024-11-27 00:01:46.165905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:58.273 [2024-11-27 00:01:46.165914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:58.274 [2024-11-27 00:01:46.165922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:58.274 [2024-11-27 00:01:46.165932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:58.274 [2024-11-27 00:01:46.165942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:58.274 [2024-11-27 00:01:46.165953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:58.274 [2024-11-27 00:01:46.165963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:58.274 [2024-11-27 00:01:46.165972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:58.274 [2024-11-27 00:01:46.165988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:58.274 [2024-11-27 00:01:46.166001] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:27:58.274 [2024-11-27 00:01:46.166011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:58.274 [2024-11-27 00:01:46.166020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:58.274 [2024-11-27 00:01:46.166036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:58.274 [2024-11-27 00:01:46.166043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:58.274 [2024-11-27 00:01:46.166075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.274 [2024-11-27 00:01:46.166092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:58.274 [2024-11-27 00:01:46.166100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.274 [2024-11-27 00:01:46.166122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:58.274 [2024-11-27 00:01:46.166131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.274 [2024-11-27 00:01:46.166148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:58.274 [2024-11-27 00:01:46.166156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.274 [2024-11-27 00:01:46.166172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:58.274 [2024-11-27 00:01:46.166180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:58.274 [2024-11-27 00:01:46.166197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:58.274 [2024-11-27 00:01:46.166204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:58.274 [2024-11-27 00:01:46.166211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:58.274 [2024-11-27 00:01:46.166218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:58.274 [2024-11-27 00:01:46.166225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:58.274 [2024-11-27 00:01:46.166232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:58.274 [2024-11-27 00:01:46.166249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:58.274 [2024-11-27 00:01:46.166256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166262] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:58.274 [2024-11-27 00:01:46.166273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:58.274 [2024-11-27 
00:01:46.166284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:58.274 [2024-11-27 00:01:46.166292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.274 [2024-11-27 00:01:46.166301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:58.274 [2024-11-27 00:01:46.166309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:58.274 [2024-11-27 00:01:46.166315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:58.274 [2024-11-27 00:01:46.166322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:58.274 [2024-11-27 00:01:46.166329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:58.274 [2024-11-27 00:01:46.166336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:58.274 [2024-11-27 00:01:46.166345] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:58.274 [2024-11-27 00:01:46.166355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.274 [2024-11-27 00:01:46.166364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:58.274 [2024-11-27 00:01:46.166372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:58.274 [2024-11-27 00:01:46.166382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:58.274 [2024-11-27 00:01:46.166390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:58.274 [2024-11-27 00:01:46.166397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:58.274 [2024-11-27 00:01:46.166404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:58.274 [2024-11-27 00:01:46.166412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:58.274 [2024-11-27 00:01:46.166419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:58.274 [2024-11-27 00:01:46.166427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:58.274 [2024-11-27 00:01:46.166441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:58.274 [2024-11-27 00:01:46.166449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:58.274 [2024-11-27 00:01:46.166456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:58.274 [2024-11-27 00:01:46.166463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:58.274 [2024-11-27 00:01:46.166471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:58.274 [2024-11-27 00:01:46.166479] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:58.274 [2024-11-27 00:01:46.166488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.274 [2024-11-27 00:01:46.166497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:58.274 [2024-11-27 00:01:46.166504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:58.274 [2024-11-27 00:01:46.166515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:58.274 [2024-11-27 00:01:46.166522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:58.274 [2024-11-27 00:01:46.166531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.166539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:58.275 [2024-11-27 00:01:46.166553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:27:58.275 [2024-11-27 00:01:46.166565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.187077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.187130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.275 [2024-11-27 00:01:46.187143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.441 ms 00:27:58.275 [2024-11-27 00:01:46.187152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.187243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.187253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:58.275 [2024-11-27 00:01:46.187263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:58.275 [2024-11-27 00:01:46.187271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.213219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.213308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.275 [2024-11-27 00:01:46.213334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.882 ms 00:27:58.275 [2024-11-27 00:01:46.213349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.213420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.213437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.275 [2024-11-27 00:01:46.213458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:58.275 [2024-11-27 00:01:46.213472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.214359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.214402] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.275 [2024-11-27 00:01:46.214420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:27:58.275 [2024-11-27 00:01:46.214435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.214669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.214695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.275 [2024-11-27 00:01:46.214709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:27:58.275 [2024-11-27 00:01:46.214721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.226245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.226447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.275 [2024-11-27 00:01:46.226467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.492 ms 00:27:58.275 [2024-11-27 00:01:46.226486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.231369] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:58.275 [2024-11-27 00:01:46.231428] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:58.275 [2024-11-27 00:01:46.231446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.231455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:58.275 [2024-11-27 00:01:46.231465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:27:58.275 [2024-11-27 00:01:46.231473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.247864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.247915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:58.275 [2024-11-27 00:01:46.247928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.333 ms 00:27:58.275 [2024-11-27 00:01:46.247937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.251013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.251062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:58.275 [2024-11-27 00:01:46.251072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:27:58.275 [2024-11-27 00:01:46.251080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.253764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.253968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:58.275 [2024-11-27 00:01:46.253999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:27:58.275 [2024-11-27 00:01:46.254007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.254364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.254380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L 
checkpointing 00:27:58.275 [2024-11-27 00:01:46.254390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:27:58.275 [2024-11-27 00:01:46.254401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.283096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.283172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:58.275 [2024-11-27 00:01:46.283187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.671 ms 00:27:58.275 [2024-11-27 00:01:46.283197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.291830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:58.275 [2024-11-27 00:01:46.295428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.295473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:58.275 [2024-11-27 00:01:46.295487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.156 ms 00:27:58.275 [2024-11-27 00:01:46.295496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.295598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.295611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:58.275 [2024-11-27 00:01:46.295631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:58.275 [2024-11-27 00:01:46.295645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.296728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.296784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:58.275 [2024-11-27 00:01:46.296817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:27:58.275 [2024-11-27 00:01:46.296826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.296872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.296884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:58.275 [2024-11-27 00:01:46.296897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:58.275 [2024-11-27 00:01:46.296909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.296958] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:58.275 [2024-11-27 00:01:46.296974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.296983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:58.275 [2024-11-27 00:01:46.296998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:58.275 [2024-11-27 00:01:46.297006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.303071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.303127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:58.275 [2024-11-27 00:01:46.303140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.044 ms 
00:27:58.275 [2024-11-27 00:01:46.303149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.303259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.275 [2024-11-27 00:01:46.303270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:58.275 [2024-11-27 00:01:46.303281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:58.275 [2024-11-27 00:01:46.303297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.275 [2024-11-27 00:01:46.304853] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 162.672 ms, result 0 00:27:59.663  [2024-11-27T00:01:48.739Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-27T00:01:49.700Z] Copying: 27/1024 [MB] (15 MBps) [2024-11-27T00:01:50.645Z] Copying: 41/1024 [MB] (14 MBps) [2024-11-27T00:01:51.590Z] Copying: 57/1024 [MB] (15 MBps) [2024-11-27T00:01:52.535Z] Copying: 67/1024 [MB] (10 MBps) [2024-11-27T00:01:53.924Z] Copying: 80/1024 [MB] (12 MBps) [2024-11-27T00:01:54.496Z] Copying: 96/1024 [MB] (16 MBps) [2024-11-27T00:01:55.883Z] Copying: 106/1024 [MB] (10 MBps) [2024-11-27T00:01:56.828Z] Copying: 121/1024 [MB] (14 MBps) [2024-11-27T00:01:57.772Z] Copying: 131/1024 [MB] (10 MBps) [2024-11-27T00:01:58.715Z] Copying: 144/1024 [MB] (12 MBps) [2024-11-27T00:01:59.659Z] Copying: 155/1024 [MB] (10 MBps) [2024-11-27T00:02:00.605Z] Copying: 166/1024 [MB] (11 MBps) [2024-11-27T00:02:01.550Z] Copying: 178/1024 [MB] (11 MBps) [2024-11-27T00:02:02.495Z] Copying: 195/1024 [MB] (16 MBps) [2024-11-27T00:02:03.880Z] Copying: 206/1024 [MB] (10 MBps) [2024-11-27T00:02:04.824Z] Copying: 237/1024 [MB] (30 MBps) [2024-11-27T00:02:05.769Z] Copying: 254/1024 [MB] (17 MBps) [2024-11-27T00:02:06.713Z] Copying: 268/1024 [MB] (13 MBps) [2024-11-27T00:02:07.657Z] Copying: 284/1024 [MB] (16 MBps) [2024-11-27T00:02:08.602Z] Copying: 295/1024 [MB] (10 MBps) [2024-11-27T00:02:09.547Z] Copying: 314/1024 [MB] (19 MBps) [2024-11-27T00:02:10.490Z] Copying: 326/1024 [MB] (11 MBps) [2024-11-27T00:02:11.870Z] Copying: 343/1024 [MB] (17 MBps) [2024-11-27T00:02:12.813Z] Copying: 357/1024 [MB] (13 MBps) [2024-11-27T00:02:13.757Z] Copying: 374/1024 [MB] (17 MBps) [2024-11-27T00:02:14.700Z] Copying: 395/1024 [MB] (20 MBps) [2024-11-27T00:02:15.643Z] Copying: 421/1024 [MB] (26 MBps) [2024-11-27T00:02:16.631Z] Copying: 436/1024 [MB] (14 MBps) [2024-11-27T00:02:17.598Z] Copying: 454/1024 [MB] (18 MBps) [2024-11-27T00:02:18.541Z] Copying: 472/1024 [MB] (17 MBps) [2024-11-27T00:02:19.488Z] Copying: 490/1024 [MB] (18 MBps) [2024-11-27T00:02:20.874Z] Copying: 504/1024 [MB] (13 MBps) [2024-11-27T00:02:21.820Z] Copying: 514/1024 [MB] (10 MBps) [2024-11-27T00:02:22.767Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-27T00:02:23.714Z] Copying: 536/1024 [MB] (11 MBps) [2024-11-27T00:02:24.660Z] Copying: 547/1024 [MB] (10 MBps) [2024-11-27T00:02:25.607Z] Copying: 558/1024 [MB] (10 MBps) [2024-11-27T00:02:26.553Z] Copying: 568/1024 [MB] (10 MBps) [2024-11-27T00:02:27.500Z] Copying: 579/1024 [MB] (10 MBps) [2024-11-27T00:02:28.890Z] Copying: 589/1024 [MB] (10 MBps) [2024-11-27T00:02:29.833Z] Copying: 600/1024 [MB] (10 MBps) [2024-11-27T00:02:30.779Z] Copying: 610/1024 [MB] (10 MBps) [2024-11-27T00:02:31.721Z] Copying: 621/1024 [MB] (10 MBps) [2024-11-27T00:02:32.660Z] Copying: 632/1024 [MB] (11 MBps) [2024-11-27T00:02:33.602Z] Copying: 666/1024 [MB] (33 MBps) [2024-11-27T00:02:34.545Z] Copying: 685/1024 
[MB] (19 MBps) [2024-11-27T00:02:35.490Z] Copying: 705/1024 [MB] (19 MBps) [2024-11-27T00:02:36.879Z] Copying: 716/1024 [MB] (10 MBps) [2024-11-27T00:02:37.820Z] Copying: 726/1024 [MB] (10 MBps) [2024-11-27T00:02:38.762Z] Copying: 745/1024 [MB] (18 MBps) [2024-11-27T00:02:39.708Z] Copying: 769/1024 [MB] (24 MBps) [2024-11-27T00:02:40.652Z] Copying: 782/1024 [MB] (12 MBps) [2024-11-27T00:02:41.599Z] Copying: 794/1024 [MB] (11 MBps) [2024-11-27T00:02:42.542Z] Copying: 813/1024 [MB] (19 MBps) [2024-11-27T00:02:43.487Z] Copying: 835/1024 [MB] (22 MBps) [2024-11-27T00:02:44.874Z] Copying: 856/1024 [MB] (20 MBps) [2024-11-27T00:02:45.819Z] Copying: 876/1024 [MB] (19 MBps) [2024-11-27T00:02:46.764Z] Copying: 900/1024 [MB] (24 MBps) [2024-11-27T00:02:47.738Z] Copying: 922/1024 [MB] (21 MBps) [2024-11-27T00:02:48.696Z] Copying: 940/1024 [MB] (17 MBps) [2024-11-27T00:02:49.638Z] Copying: 960/1024 [MB] (20 MBps) [2024-11-27T00:02:50.582Z] Copying: 981/1024 [MB] (21 MBps) [2024-11-27T00:02:51.528Z] Copying: 994/1024 [MB] (12 MBps) [2024-11-27T00:02:52.918Z] Copying: 1007/1024 [MB] (12 MBps) [2024-11-27T00:02:53.179Z] Copying: 1017/1024 [MB] (10 MBps) [2024-11-27T00:02:53.442Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 00:02:53.224248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.224385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:05.312 [2024-11-27 00:02:53.224416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:05.312 [2024-11-27 00:02:53.224428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.224463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:05.312 [2024-11-27 00:02:53.225522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.225569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:05.312 [2024-11-27 00:02:53.225586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:29:05.312 [2024-11-27 00:02:53.225600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.226262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.226305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:05.312 [2024-11-27 00:02:53.226319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:29:05.312 [2024-11-27 00:02:53.226340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.233235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.233286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:05.312 [2024-11-27 00:02:53.233303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:29:05.312 [2024-11-27 00:02:53.233316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.241617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.241663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:05.312 [2024-11-27 00:02:53.241676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.255 ms 00:29:05.312 [2024-11-27 00:02:53.241684] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.244958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.245158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:05.312 [2024-11-27 00:02:53.245179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.188 ms 00:29:05.312 [2024-11-27 00:02:53.245188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.249970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.250028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:05.312 [2024-11-27 00:02:53.250040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.634 ms 00:29:05.312 [2024-11-27 00:02:53.250049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.254423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.254472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:05.312 [2024-11-27 00:02:53.254485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:29:05.312 [2024-11-27 00:02:53.254504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.257762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.257832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:05.312 [2024-11-27 00:02:53.257843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:29:05.312 [2024-11-27 00:02:53.257851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.260763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.260961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:05.312 [2024-11-27 00:02:53.260981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:29:05.312 [2024-11-27 00:02:53.260988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.263434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.263490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:05.312 [2024-11-27 00:02:53.263501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:29:05.312 [2024-11-27 00:02:53.263510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.265612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.312 [2024-11-27 00:02:53.265665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:05.312 [2024-11-27 00:02:53.265674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:29:05.312 [2024-11-27 00:02:53.265682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.312 [2024-11-27 00:02:53.265728] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:05.312 [2024-11-27 00:02:53.265745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:05.312 [2024-11-27 00:02:53.265757] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:05.312 [2024-11-27 00:02:53.265766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 
00:02:53.265984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.265998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:29:05.312 [2024-11-27 00:02:53.266224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:05.312 [2024-11-27 00:02:53.266466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:05.313 [2024-11-27 00:02:53.266627] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:29:05.313 [2024-11-27 00:02:53.266636] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 549601c2-4d66-4189-bcb6-0dad7a61c53b 00:29:05.313 [2024-11-27 00:02:53.266651] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:05.313 [2024-11-27 00:02:53.266659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:05.313 [2024-11-27 00:02:53.266668] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:05.313 [2024-11-27 00:02:53.266678] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:05.313 [2024-11-27 00:02:53.266685] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:05.313 [2024-11-27 00:02:53.266694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:05.313 [2024-11-27 00:02:53.266706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:05.313 [2024-11-27 00:02:53.266713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:05.313 [2024-11-27 00:02:53.266720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:05.313 [2024-11-27 00:02:53.266729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.313 [2024-11-27 00:02:53.266751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:05.313 [2024-11-27 00:02:53.266761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:29:05.313 [2024-11-27 00:02:53.266771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.270014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.313 [2024-11-27 00:02:53.270046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:05.313 [2024-11-27 00:02:53.270058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:29:05.313 [2024-11-27 00:02:53.270093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.270263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.313 [2024-11-27 00:02:53.270272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:05.313 [2024-11-27 00:02:53.270282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:29:05.313 [2024-11-27 00:02:53.270290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.280766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.280999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:05.313 [2024-11-27 00:02:53.281019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.281036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.281101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.281111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:05.313 [2024-11-27 00:02:53.281120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.281128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.281201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:29:05.313 [2024-11-27 00:02:53.281213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:05.313 [2024-11-27 00:02:53.281222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.281231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.281253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.281262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:05.313 [2024-11-27 00:02:53.281270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.281279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.300413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.300622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:05.313 [2024-11-27 00:02:53.300644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.300663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.315091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.315275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:05.313 [2024-11-27 00:02:53.315295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.315304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.315380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.315392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:05.313 [2024-11-27 00:02:53.315402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.315410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.315464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.315474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:05.313 [2024-11-27 00:02:53.315484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.315493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.315577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.315588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:05.313 [2024-11-27 00:02:53.315598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.315606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.315652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.315667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:05.313 [2024-11-27 00:02:53.315681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.315690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 
00:02:53.315740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.315750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:05.313 [2024-11-27 00:02:53.315759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.315767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.315858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.313 [2024-11-27 00:02:53.315872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:05.313 [2024-11-27 00:02:53.315882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.313 [2024-11-27 00:02:53.315902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.313 [2024-11-27 00:02:53.316066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 91.782 ms, result 0 00:29:05.574 00:29:05.574 00:29:05.574 00:02:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:08.124 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:08.124 00:02:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:08.124 00:02:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:08.124 00:02:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:08.124 00:02:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:08.124 00:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:08.124 00:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:08.124 00:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:08.124 Process with pid 91142 is not found 00:29:08.124 00:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91142 00:29:08.124 00:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91142 ']' 00:29:08.124 00:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91142 00:29:08.124 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91142) - No such process 00:29:08.124 00:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91142 is not found' 00:29:08.124 00:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:08.384 Remove shared memory files 00:29:08.384 00:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:08.384 00:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:08.384 00:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:08.384 00:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:08.384 00:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:08.384 00:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:08.384 00:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:08.384 ************************************ 00:29:08.384 END TEST 
ftl_dirty_shutdown 00:29:08.385 ************************************ 00:29:08.385 00:29:08.385 real 4m15.577s 00:29:08.385 user 4m45.886s 00:29:08.385 sys 0m28.827s 00:29:08.385 00:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:08.385 00:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:08.385 00:02:56 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:08.385 00:02:56 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:08.385 00:02:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.385 00:02:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:08.647 ************************************ 00:29:08.647 START TEST ftl_upgrade_shutdown 00:29:08.647 ************************************ 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:08.647 * Looking for test storage... 00:29:08.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:08.647 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:08.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.648 --rc genhtml_branch_coverage=1 00:29:08.648 --rc genhtml_function_coverage=1 00:29:08.648 --rc genhtml_legend=1 00:29:08.648 --rc geninfo_all_blocks=1 00:29:08.648 --rc geninfo_unexecuted_blocks=1 00:29:08.648 00:29:08.648 ' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:08.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.648 --rc genhtml_branch_coverage=1 00:29:08.648 --rc genhtml_function_coverage=1 00:29:08.648 --rc genhtml_legend=1 00:29:08.648 --rc geninfo_all_blocks=1 00:29:08.648 --rc geninfo_unexecuted_blocks=1 00:29:08.648 00:29:08.648 ' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:08.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.648 --rc genhtml_branch_coverage=1 00:29:08.648 --rc genhtml_function_coverage=1 00:29:08.648 --rc genhtml_legend=1 00:29:08.648 --rc geninfo_all_blocks=1 00:29:08.648 --rc geninfo_unexecuted_blocks=1 00:29:08.648 00:29:08.648 ' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:08.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.648 --rc genhtml_branch_coverage=1 00:29:08.648 --rc genhtml_function_coverage=1 00:29:08.648 --rc genhtml_legend=1 00:29:08.648 --rc geninfo_all_blocks=1 00:29:08.648 --rc geninfo_unexecuted_blocks=1 00:29:08.648 00:29:08.648 ' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:08.648 00:02:56 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93894 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93894 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93894 ']' 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.648 00:02:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:08.909 [2024-11-27 00:02:56.804743] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:29:08.909 [2024-11-27 00:02:56.805176] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93894 ] 00:29:08.909 [2024-11-27 00:02:56.951058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.909 [2024-11-27 00:02:56.992399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:09.853 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:09.854 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:09.854 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:09.854 00:02:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:09.854 00:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:09.854 00:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:09.854 00:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:09.854 00:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:09.854 00:02:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:10.115 00:02:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:10.115 { 00:29:10.115 "name": "basen1", 00:29:10.115 "aliases": [ 00:29:10.115 "801f6333-792b-4304-b87b-21e453b8f982" 00:29:10.115 ], 00:29:10.115 "product_name": "NVMe disk", 00:29:10.115 "block_size": 4096, 00:29:10.115 "num_blocks": 1310720, 00:29:10.115 "uuid": "801f6333-792b-4304-b87b-21e453b8f982", 00:29:10.115 "numa_id": -1, 00:29:10.115 "assigned_rate_limits": { 00:29:10.115 "rw_ios_per_sec": 0, 00:29:10.115 "rw_mbytes_per_sec": 0, 00:29:10.115 "r_mbytes_per_sec": 0, 00:29:10.115 "w_mbytes_per_sec": 0 00:29:10.115 }, 00:29:10.115 "claimed": true, 00:29:10.115 "claim_type": "read_many_write_one", 00:29:10.115 "zoned": false, 00:29:10.115 "supported_io_types": { 00:29:10.115 "read": true, 00:29:10.115 "write": true, 00:29:10.115 "unmap": true, 00:29:10.115 "flush": true, 00:29:10.115 "reset": true, 00:29:10.115 "nvme_admin": true, 00:29:10.115 "nvme_io": true, 00:29:10.115 "nvme_io_md": false, 00:29:10.115 "write_zeroes": true, 00:29:10.115 "zcopy": false, 00:29:10.115 "get_zone_info": false, 00:29:10.115 "zone_management": false, 00:29:10.115 "zone_append": false, 00:29:10.115 "compare": true, 00:29:10.115 "compare_and_write": false, 00:29:10.115 "abort": true, 00:29:10.115 "seek_hole": false, 00:29:10.115 "seek_data": false, 00:29:10.115 "copy": true, 00:29:10.115 "nvme_iov_md": false 00:29:10.115 }, 00:29:10.115 "driver_specific": { 00:29:10.115 "nvme": [ 00:29:10.115 { 00:29:10.115 "pci_address": "0000:00:11.0", 00:29:10.116 "trid": { 00:29:10.116 "trtype": "PCIe", 00:29:10.116 "traddr": "0000:00:11.0" 00:29:10.116 }, 00:29:10.116 "ctrlr_data": { 00:29:10.116 "cntlid": 0, 00:29:10.116 "vendor_id": "0x1b36", 00:29:10.116 "model_number": "QEMU NVMe Ctrl", 00:29:10.116 "serial_number": "12341", 00:29:10.116 "firmware_revision": "8.0.0", 00:29:10.116 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:10.116 "oacs": { 00:29:10.116 "security": 0, 00:29:10.116 "format": 1, 00:29:10.116 "firmware": 0, 00:29:10.116 "ns_manage": 1 00:29:10.116 }, 00:29:10.116 "multi_ctrlr": false, 00:29:10.116 "ana_reporting": false 00:29:10.116 }, 00:29:10.116 "vs": { 00:29:10.116 "nvme_version": "1.4" 00:29:10.116 }, 00:29:10.116 "ns_data": { 00:29:10.116 "id": 1, 00:29:10.116 "can_share": false 00:29:10.116 } 00:29:10.116 } 00:29:10.116 ], 00:29:10.116 "mp_policy": "active_passive" 00:29:10.116 } 00:29:10.116 } 00:29:10.116 ]' 00:29:10.116 00:02:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:10.116 00:02:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:10.116 00:02:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=8ce5df7d-3ff1-4c74-8280-631546b83492 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:10.378 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ce5df7d-3ff1-4c74-8280-631546b83492 00:29:10.639 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:10.900 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=c47206db-c690-45c9-b7a4-3d7848b3d35c 00:29:10.900 00:02:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c47206db-c690-45c9-b7a4-3d7848b3d35c 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=4712772b-b696-4322-a44b-0eb625536dd9 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 4712772b-b696-4322-a44b-0eb625536dd9 ]] 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4712772b-b696-4322-a44b-0eb625536dd9 5120 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=4712772b-b696-4322-a44b-0eb625536dd9 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4712772b-b696-4322-a44b-0eb625536dd9 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4712772b-b696-4322-a44b-0eb625536dd9 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:11.160 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4712772b-b696-4322-a44b-0eb625536dd9 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:11.420 { 00:29:11.420 "name": "4712772b-b696-4322-a44b-0eb625536dd9", 00:29:11.420 "aliases": [ 00:29:11.420 "lvs/basen1p0" 00:29:11.420 ], 00:29:11.420 "product_name": "Logical Volume", 00:29:11.420 "block_size": 4096, 00:29:11.420 "num_blocks": 5242880, 00:29:11.420 "uuid": "4712772b-b696-4322-a44b-0eb625536dd9", 00:29:11.420 "assigned_rate_limits": { 00:29:11.420 "rw_ios_per_sec": 0, 00:29:11.420 "rw_mbytes_per_sec": 0, 00:29:11.420 "r_mbytes_per_sec": 0, 00:29:11.420 "w_mbytes_per_sec": 0 00:29:11.420 }, 00:29:11.420 "claimed": false, 00:29:11.420 "zoned": false, 00:29:11.420 "supported_io_types": { 00:29:11.420 "read": true, 00:29:11.420 "write": true, 00:29:11.420 "unmap": true, 00:29:11.420 "flush": false, 00:29:11.420 "reset": true, 00:29:11.420 "nvme_admin": false, 00:29:11.420 "nvme_io": false, 00:29:11.420 "nvme_io_md": false, 00:29:11.420 "write_zeroes": 
true, 00:29:11.420 "zcopy": false, 00:29:11.420 "get_zone_info": false, 00:29:11.420 "zone_management": false, 00:29:11.420 "zone_append": false, 00:29:11.420 "compare": false, 00:29:11.420 "compare_and_write": false, 00:29:11.420 "abort": false, 00:29:11.420 "seek_hole": true, 00:29:11.420 "seek_data": true, 00:29:11.420 "copy": false, 00:29:11.420 "nvme_iov_md": false 00:29:11.420 }, 00:29:11.420 "driver_specific": { 00:29:11.420 "lvol": { 00:29:11.420 "lvol_store_uuid": "c47206db-c690-45c9-b7a4-3d7848b3d35c", 00:29:11.420 "base_bdev": "basen1", 00:29:11.420 "thin_provision": true, 00:29:11.420 "num_allocated_clusters": 0, 00:29:11.420 "snapshot": false, 00:29:11.420 "clone": false, 00:29:11.420 "esnap_clone": false 00:29:11.420 } 00:29:11.420 } 00:29:11.420 } 00:29:11.420 ]' 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:11.420 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:11.680 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:11.680 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:11.680 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:11.939 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:11.939 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:11.939 00:02:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4712772b-b696-4322-a44b-0eb625536dd9 -c cachen1p0 --l2p_dram_limit 2 00:29:12.200 [2024-11-27 00:03:00.156479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.156629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:12.200 [2024-11-27 00:03:00.156648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:12.200 [2024-11-27 00:03:00.156661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.156726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.156745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:12.200 [2024-11-27 00:03:00.156752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:29:12.200 [2024-11-27 00:03:00.156762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.156779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:12.200 [2024-11-27 
00:03:00.157404] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:12.200 [2024-11-27 00:03:00.157486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.157541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:12.200 [2024-11-27 00:03:00.157571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.709 ms 00:29:12.200 [2024-11-27 00:03:00.157598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.157870] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9424c1e2-1ab0-4a4e-bb8d-8be1dadffe97 00:29:12.200 [2024-11-27 00:03:00.160091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.160179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:12.200 [2024-11-27 00:03:00.160214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:29:12.200 [2024-11-27 00:03:00.160236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.170196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.170269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:12.200 [2024-11-27 00:03:00.170303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.749 ms 00:29:12.200 [2024-11-27 00:03:00.170335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.170431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.170440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:12.200 [2024-11-27 00:03:00.170454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:12.200 [2024-11-27 00:03:00.170465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.170522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.170533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:12.200 [2024-11-27 00:03:00.170546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:12.200 [2024-11-27 00:03:00.170553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.170578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:12.200 [2024-11-27 00:03:00.172435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.172465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:12.200 [2024-11-27 00:03:00.172474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.866 ms 00:29:12.200 [2024-11-27 00:03:00.172487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.172514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.172525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:12.200 [2024-11-27 00:03:00.172534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:12.200 [2024-11-27 00:03:00.172545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.172562] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:12.200 [2024-11-27 00:03:00.172710] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:12.200 [2024-11-27 00:03:00.172722] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:12.200 [2024-11-27 00:03:00.172735] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:12.200 [2024-11-27 00:03:00.172745] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:12.200 [2024-11-27 00:03:00.172763] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:12.200 [2024-11-27 00:03:00.172772] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:12.200 [2024-11-27 00:03:00.172784] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:12.200 [2024-11-27 00:03:00.172807] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:12.200 [2024-11-27 00:03:00.172817] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:12.200 [2024-11-27 00:03:00.172825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.172834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:12.200 [2024-11-27 00:03:00.172842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:29:12.200 [2024-11-27 00:03:00.172851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.172934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.200 [2024-11-27 00:03:00.172946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:12.200 [2024-11-27 00:03:00.172954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:29:12.200 [2024-11-27 00:03:00.172964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.200 [2024-11-27 00:03:00.173058] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:12.200 [2024-11-27 00:03:00.173069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:12.200 [2024-11-27 00:03:00.173077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:12.200 [2024-11-27 00:03:00.173087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.200 [2024-11-27 00:03:00.173098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:12.200 [2024-11-27 00:03:00.173108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:12.200 [2024-11-27 00:03:00.173116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:12.200 [2024-11-27 00:03:00.173126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:12.200 [2024-11-27 00:03:00.173133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:12.200 [2024-11-27 00:03:00.173145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.200 [2024-11-27 00:03:00.173152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:12.200 [2024-11-27 00:03:00.173162] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:12.200 [2024-11-27 00:03:00.173171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.200 [2024-11-27 00:03:00.173183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:12.200 [2024-11-27 00:03:00.173191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:12.200 [2024-11-27 00:03:00.173200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.200 [2024-11-27 00:03:00.173207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:12.200 [2024-11-27 00:03:00.173216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:12.200 [2024-11-27 00:03:00.173224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.200 [2024-11-27 00:03:00.173235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:12.200 [2024-11-27 00:03:00.173243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:12.200 [2024-11-27 00:03:00.173253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:12.200 [2024-11-27 00:03:00.173260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:12.200 [2024-11-27 00:03:00.173270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:12.200 [2024-11-27 00:03:00.173277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:12.200 [2024-11-27 00:03:00.173286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:12.200 [2024-11-27 00:03:00.173294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:12.201 [2024-11-27 00:03:00.173303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:12.201 [2024-11-27 00:03:00.173311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:12.201 [2024-11-27 00:03:00.173322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:12.201 [2024-11-27 00:03:00.173329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:12.201 [2024-11-27 00:03:00.173339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:12.201 [2024-11-27 00:03:00.173347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:12.201 [2024-11-27 00:03:00.173356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.201 [2024-11-27 00:03:00.173364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:12.201 [2024-11-27 00:03:00.173375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:12.201 [2024-11-27 00:03:00.173383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.201 [2024-11-27 00:03:00.173392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:12.201 [2024-11-27 00:03:00.173400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:12.201 [2024-11-27 00:03:00.173409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.201 [2024-11-27 00:03:00.173417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:12.201 [2024-11-27 00:03:00.173426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:12.201 [2024-11-27 00:03:00.173434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.201 [2024-11-27 00:03:00.173443] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:12.201 [2024-11-27 00:03:00.173451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:12.201 [2024-11-27 00:03:00.173462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:12.201 [2024-11-27 00:03:00.173469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:12.201 [2024-11-27 00:03:00.173481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:12.201 [2024-11-27 00:03:00.173494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:12.201 [2024-11-27 00:03:00.173502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:12.201 [2024-11-27 00:03:00.173508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:12.201 [2024-11-27 00:03:00.173519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:12.201 [2024-11-27 00:03:00.173526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:12.201 [2024-11-27 00:03:00.173539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:12.201 [2024-11-27 00:03:00.173551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:12.201 [2024-11-27 00:03:00.173573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:12.201 [2024-11-27 00:03:00.173598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:12.201 [2024-11-27 00:03:00.173605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:12.201 [2024-11-27 00:03:00.173616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:12.201 [2024-11-27 00:03:00.173622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:12.201 [2024-11-27 00:03:00.173680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:12.201 [2024-11-27 00:03:00.173688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:12.201 [2024-11-27 00:03:00.173704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:12.201 [2024-11-27 00:03:00.173714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:12.201 [2024-11-27 00:03:00.173722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:12.201 [2024-11-27 00:03:00.173731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.201 [2024-11-27 00:03:00.173739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:12.201 [2024-11-27 00:03:00.173750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.738 ms 00:29:12.201 [2024-11-27 00:03:00.173757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.201 [2024-11-27 00:03:00.173808] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:29:12.201 [2024-11-27 00:03:00.173818] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:17.485 [2024-11-27 00:03:04.850418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.850828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:17.485 [2024-11-27 00:03:04.850929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4676.581 ms 00:29:17.485 [2024-11-27 00:03:04.850959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.869880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.870111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:17.485 [2024-11-27 00:03:04.870223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.736 ms 00:29:17.485 [2024-11-27 00:03:04.870259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.870372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.870424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:17.485 [2024-11-27 00:03:04.870454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:17.485 [2024-11-27 00:03:04.870475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.887887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.888085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:17.485 [2024-11-27 00:03:04.888336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.343 ms 00:29:17.485 [2024-11-27 00:03:04.888520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.888587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.888646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:17.485 [2024-11-27 00:03:04.888676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:17.485 [2024-11-27 00:03:04.888743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.889503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.889655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:17.485 [2024-11-27 00:03:04.889734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.656 ms 00:29:17.485 [2024-11-27 00:03:04.889762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.889870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.889938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:17.485 [2024-11-27 00:03:04.889976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:17.485 [2024-11-27 00:03:04.889997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.901898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.902059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:17.485 [2024-11-27 00:03:04.902132] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.830 ms 00:29:17.485 [2024-11-27 00:03:04.902156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.933838] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:17.485 [2024-11-27 00:03:04.935637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.935807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:17.485 [2024-11-27 00:03:04.935878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.359 ms 00:29:17.485 [2024-11-27 00:03:04.935905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.966011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.966231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:17.485 [2024-11-27 00:03:04.966253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.045 ms 00:29:17.485 [2024-11-27 00:03:04.966269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.966381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.966396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:17.485 [2024-11-27 00:03:04.966412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:29:17.485 [2024-11-27 00:03:04.966424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.972029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.972088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:17.485 [2024-11-27 00:03:04.972104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.562 ms 00:29:17.485 [2024-11-27 00:03:04.972116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.978014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.978071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:17.485 [2024-11-27 00:03:04.978106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.847 ms 00:29:17.485 [2024-11-27 00:03:04.978118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:04.978466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:04.978480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:17.485 [2024-11-27 00:03:04.978490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:29:17.485 [2024-11-27 00:03:04.978504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:05.054924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:05.054991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:17.485 [2024-11-27 00:03:05.055008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.396 ms 00:29:17.485 [2024-11-27 00:03:05.055020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:05.063329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:17.485 [2024-11-27 00:03:05.063391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:17.485 [2024-11-27 00:03:05.063403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.245 ms 00:29:17.485 [2024-11-27 00:03:05.063416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:05.069514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:05.069692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:17.485 [2024-11-27 00:03:05.069711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.045 ms 00:29:17.485 [2024-11-27 00:03:05.069722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:05.076054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:05.076221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:17.485 [2024-11-27 00:03:05.076239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.290 ms 00:29:17.485 [2024-11-27 00:03:05.076254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:05.076301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:05.076315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:17.485 [2024-11-27 00:03:05.076325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:17.485 [2024-11-27 00:03:05.076336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:05.076439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.485 [2024-11-27 00:03:05.076453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:17.485 [2024-11-27 00:03:05.076462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:29:17.485 [2024-11-27 00:03:05.076478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.485 [2024-11-27 00:03:05.077893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4920.773 ms, result 0 00:29:17.485 { 00:29:17.485 "name": "ftl", 00:29:17.485 "uuid": "9424c1e2-1ab0-4a4e-bb8d-8be1dadffe97" 00:29:17.485 } 00:29:17.485 00:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:17.485 [2024-11-27 00:03:05.312806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.485 00:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:17.485 00:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:17.747 [2024-11-27 00:03:05.741282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:17.747 00:03:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:18.008 [2024-11-27 00:03:05.949742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:18.008 00:03:05 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:18.269 Fill FTL, iteration 1 00:29:18.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94037 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94037 /var/tmp/spdk.tgt.sock 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94037 ']' 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:18.269 00:03:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:18.528 [2024-11-27 00:03:06.410964] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:29:18.529 [2024-11-27 00:03:06.411393] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94037 ] 00:29:18.529 [2024-11-27 00:03:06.558378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.529 [2024-11-27 00:03:06.599776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.467 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.467 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:19.467 00:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:19.467 ftln1 00:29:19.467 00:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:19.467 00:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94037 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94037 ']' 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94037 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94037 00:29:19.726 killing process with pid 94037 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94037' 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94037 00:29:19.726 00:03:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94037 00:29:20.297 00:03:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:20.297 00:03:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:20.297 [2024-11-27 00:03:08.267688] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:29:20.297 [2024-11-27 00:03:08.267829] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94073 ] 00:29:20.297 [2024-11-27 00:03:08.413282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.572 [2024-11-27 00:03:08.437626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.517  [2024-11-27T00:03:10.636Z] Copying: 182/1024 [MB] (182 MBps) [2024-11-27T00:03:12.009Z] Copying: 400/1024 [MB] (218 MBps) [2024-11-27T00:03:12.945Z] Copying: 665/1024 [MB] (265 MBps) [2024-11-27T00:03:13.206Z] Copying: 932/1024 [MB] (267 MBps) [2024-11-27T00:03:13.206Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:29:25.075 00:29:25.075 Calculate MD5 checksum, iteration 1 00:29:25.075 00:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:25.075 00:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:25.075 00:03:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:25.075 00:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:25.075 00:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:25.075 00:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:25.075 00:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:25.075 00:03:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:25.334 [2024-11-27 00:03:13.244851] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:29:25.334 [2024-11-27 00:03:13.245163] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94120 ] 00:29:25.334 [2024-11-27 00:03:13.386542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.334 [2024-11-27 00:03:13.410283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.710  [2024-11-27T00:03:15.408Z] Copying: 641/1024 [MB] (641 MBps) [2024-11-27T00:03:15.408Z] Copying: 1024/1024 [MB] (average 634 MBps) 00:29:27.277 00:29:27.277 00:03:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:27.277 00:03:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:29.890 Fill FTL, iteration 2 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c7c90152cc288dd601c571f9f1569433 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:29.890 00:03:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:29.890 [2024-11-27 00:03:17.652240] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:29:29.890 [2024-11-27 00:03:17.652363] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94175 ] 00:29:29.890 [2024-11-27 00:03:17.799624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.890 [2024-11-27 00:03:17.823847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.279  [2024-11-27T00:03:20.348Z] Copying: 174/1024 [MB] (174 MBps) [2024-11-27T00:03:21.281Z] Copying: 348/1024 [MB] (174 MBps) [2024-11-27T00:03:22.215Z] Copying: 598/1024 [MB] (250 MBps) [2024-11-27T00:03:22.780Z] Copying: 854/1024 [MB] (256 MBps) [2024-11-27T00:03:23.039Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:29:34.908 00:29:34.908 Calculate MD5 checksum, iteration 2 00:29:34.908 00:03:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:34.908 00:03:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:34.908 00:03:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.908 00:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.908 00:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.908 00:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.908 00:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.908 00:03:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.908 [2024-11-27 00:03:22.907613] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:29:34.908 [2024-11-27 00:03:22.907745] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94229 ] 00:29:35.166 [2024-11-27 00:03:23.050425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.166 [2024-11-27 00:03:23.081948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.538  [2024-11-27T00:03:25.234Z] Copying: 653/1024 [MB] (653 MBps) [2024-11-27T00:03:25.802Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:29:37.671 00:29:37.671 00:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:37.671 00:03:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:39.582 00:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:39.582 00:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6faefcf2c44f97531c18658456a4496b 00:29:39.582 00:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:39.583 00:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:39.583 00:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:39.843 [2024-11-27 00:03:27.782863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.843 [2024-11-27 00:03:27.782910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:39.843 [2024-11-27 00:03:27.782920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:39.843 [2024-11-27 00:03:27.782929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.843 [2024-11-27 00:03:27.782946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.843 [2024-11-27 00:03:27.782953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:39.843 [2024-11-27 00:03:27.782959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:39.843 [2024-11-27 00:03:27.782965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.843 [2024-11-27 00:03:27.782980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.843 [2024-11-27 00:03:27.782990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:39.843 [2024-11-27 00:03:27.782995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:39.843 [2024-11-27 00:03:27.783003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.843 [2024-11-27 00:03:27.783052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.179 ms, result 0 00:29:39.843 true 00:29:39.843 00:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:40.103 { 00:29:40.103 "name": "ftl", 00:29:40.103 "properties": [ 00:29:40.103 { 00:29:40.103 "name": "superblock_version", 00:29:40.103 "value": 5, 00:29:40.103 "read-only": true 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "name": "base_device", 00:29:40.103 "bands": [ 00:29:40.103 { 00:29:40.103 "id": 0, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 
00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 1, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 2, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 3, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 4, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 5, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 6, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 7, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 8, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 9, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 10, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 11, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 12, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "id": 13, 00:29:40.103 "state": "FREE", 00:29:40.103 "validity": 0.0 00:29:40.103 }, 00:29:40.104 { 00:29:40.104 "id": 14, 00:29:40.104 "state": "FREE", 00:29:40.104 "validity": 0.0 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "id": 15, 00:29:40.104 "state": "FREE", 00:29:40.104 "validity": 0.0 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "id": 16, 00:29:40.104 "state": "FREE", 00:29:40.104 "validity": 0.0 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "id": 17, 00:29:40.104 "state": "FREE", 00:29:40.104 "validity": 0.0 00:29:40.104 } 00:29:40.104 ], 00:29:40.104 "read-only": true 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "name": "cache_device", 00:29:40.104 "type": "bdev", 00:29:40.104 "chunks": [ 00:29:40.104 { 00:29:40.104 "id": 0, 00:29:40.104 "state": "INACTIVE", 00:29:40.104 "utilization": 0.0 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "id": 1, 00:29:40.104 "state": "CLOSED", 00:29:40.104 "utilization": 1.0 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "id": 2, 00:29:40.104 "state": "CLOSED", 00:29:40.104 "utilization": 1.0 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "id": 3, 00:29:40.104 "state": "OPEN", 00:29:40.104 "utilization": 0.001953125 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "id": 4, 00:29:40.104 "state": "OPEN", 00:29:40.104 "utilization": 0.0 00:29:40.104 } 00:29:40.104 ], 00:29:40.104 "read-only": true 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "name": "verbose_mode", 00:29:40.104 "value": true, 00:29:40.104 "unit": "", 00:29:40.104 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:40.104 }, 00:29:40.104 { 00:29:40.104 "name": "prep_upgrade_on_shutdown", 00:29:40.104 "value": false, 00:29:40.104 "unit": "", 00:29:40.104 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:40.104 } 00:29:40.104 ] 00:29:40.104 } 00:29:40.104 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:40.104 [2024-11-27 00:03:28.191185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:40.104 [2024-11-27 00:03:28.191323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:40.104 [2024-11-27 00:03:28.191335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:40.104 [2024-11-27 00:03:28.191341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.104 [2024-11-27 00:03:28.191362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.104 [2024-11-27 00:03:28.191368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:40.104 [2024-11-27 00:03:28.191373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:40.104 [2024-11-27 00:03:28.191379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.104 [2024-11-27 00:03:28.191393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.104 [2024-11-27 00:03:28.191399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:40.104 [2024-11-27 00:03:28.191405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:40.104 [2024-11-27 00:03:28.191410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.104 [2024-11-27 00:03:28.191455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0 00:29:40.104 true 00:29:40.104 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:40.104 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:40.104 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:40.364 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:40.364 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:40.364 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:40.625 [2024-11-27 00:03:28.582252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.625 [2024-11-27 00:03:28.582283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:40.625 [2024-11-27 00:03:28.582291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:40.625 [2024-11-27 00:03:28.582297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.625 [2024-11-27 00:03:28.582312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.625 [2024-11-27 00:03:28.582319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:40.625 [2024-11-27 00:03:28.582324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:40.625 [2024-11-27 00:03:28.582330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.625 [2024-11-27 00:03:28.582344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.625 [2024-11-27 00:03:28.582349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:40.625 [2024-11-27 00:03:28.582355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:40.625 [2024-11-27 00:03:28.582360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:40.625 [2024-11-27 00:03:28.582398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.136 ms, result 0 00:29:40.625 true 00:29:40.625 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:40.625 { 00:29:40.625 "name": "ftl", 00:29:40.625 "properties": [ 00:29:40.625 { 00:29:40.625 "name": "superblock_version", 00:29:40.625 "value": 5, 00:29:40.625 "read-only": true 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "name": "base_device", 00:29:40.625 "bands": [ 00:29:40.625 { 00:29:40.625 "id": 0, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 1, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 2, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 3, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 4, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 5, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 6, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 7, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 8, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 9, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 10, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 11, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 12, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 13, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 14, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 15, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 16, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 17, 00:29:40.625 "state": "FREE", 00:29:40.625 "validity": 0.0 00:29:40.625 } 00:29:40.625 ], 00:29:40.625 "read-only": true 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "name": "cache_device", 00:29:40.625 "type": "bdev", 00:29:40.625 "chunks": [ 00:29:40.625 { 00:29:40.625 "id": 0, 00:29:40.625 "state": "INACTIVE", 00:29:40.625 "utilization": 0.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 1, 00:29:40.625 "state": "CLOSED", 00:29:40.625 "utilization": 1.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 2, 00:29:40.625 "state": "CLOSED", 00:29:40.625 "utilization": 1.0 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 3, 00:29:40.625 "state": "OPEN", 00:29:40.625 "utilization": 0.001953125 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "id": 4, 00:29:40.625 "state": "OPEN", 00:29:40.625 "utilization": 0.0 00:29:40.625 } 00:29:40.625 ], 00:29:40.625 "read-only": true 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "name": "verbose_mode", 
00:29:40.625 "value": true, 00:29:40.625 "unit": "", 00:29:40.625 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:40.625 }, 00:29:40.625 { 00:29:40.625 "name": "prep_upgrade_on_shutdown", 00:29:40.625 "value": true, 00:29:40.625 "unit": "", 00:29:40.625 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:40.625 } 00:29:40.625 ] 00:29:40.625 } 00:29:40.886 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:40.886 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93894 ]] 00:29:40.886 00:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93894 00:29:40.886 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93894 ']' 00:29:40.886 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93894 00:29:40.886 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:40.886 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.886 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93894 00:29:40.886 killing process with pid 93894 00:29:40.887 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:40.887 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:40.887 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93894' 00:29:40.887 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93894 00:29:40.887 00:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93894 00:29:40.887 [2024-11-27 00:03:28.858412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:40.887 [2024-11-27 00:03:28.862134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.887 [2024-11-27 00:03:28.862163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:40.887 [2024-11-27 00:03:28.862172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:40.887 [2024-11-27 00:03:28.862178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.887 [2024-11-27 00:03:28.862195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:40.887 [2024-11-27 00:03:28.862561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.887 [2024-11-27 00:03:28.862578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:40.887 [2024-11-27 00:03:28.862585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:29:40.887 [2024-11-27 00:03:28.862592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.886 [2024-11-27 00:03:37.428610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.886 [2024-11-27 00:03:37.428666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:50.886 [2024-11-27 00:03:37.428678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8565.969 ms 00:29:50.886 [2024-11-27 00:03:37.428686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.886 [2024-11-27 00:03:37.430290] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:50.886 [2024-11-27 00:03:37.430320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:50.886 [2024-11-27 00:03:37.430328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.590 ms 00:29:50.886 [2024-11-27 00:03:37.430337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.886 [2024-11-27 00:03:37.431208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.886 [2024-11-27 00:03:37.431227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:50.886 [2024-11-27 00:03:37.431238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:29:50.886 [2024-11-27 00:03:37.431244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.886 [2024-11-27 00:03:37.433450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.886 [2024-11-27 00:03:37.433477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:50.886 [2024-11-27 00:03:37.433484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.177 ms 00:29:50.887 [2024-11-27 00:03:37.433490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.436457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.436596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:50.887 [2024-11-27 00:03:37.436609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.942 ms 00:29:50.887 [2024-11-27 00:03:37.436615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.436682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.436689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:50.887 [2024-11-27 00:03:37.436696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:50.887 [2024-11-27 00:03:37.436702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.438171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.438196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:50.887 [2024-11-27 00:03:37.438203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.457 ms 00:29:50.887 [2024-11-27 00:03:37.438208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.439754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.439860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:50.887 [2024-11-27 00:03:37.439871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.522 ms 00:29:50.887 [2024-11-27 00:03:37.439876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.441242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.441267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:50.887 [2024-11-27 00:03:37.441274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.344 ms 00:29:50.887 [2024-11-27 00:03:37.441279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.443121] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.443213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:50.887 [2024-11-27 00:03:37.443224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.798 ms 00:29:50.887 [2024-11-27 00:03:37.443230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.443251] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:50.887 [2024-11-27 00:03:37.443261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:50.887 [2024-11-27 00:03:37.443269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:50.887 [2024-11-27 00:03:37.443275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:50.887 [2024-11-27 00:03:37.443282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:50.887 [2024-11-27 00:03:37.443370] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:50.887 [2024-11-27 00:03:37.443375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9424c1e2-1ab0-4a4e-bb8d-8be1dadffe97 00:29:50.887 [2024-11-27 00:03:37.443382] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:50.887 [2024-11-27 00:03:37.443387] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:50.887 [2024-11-27 00:03:37.443392] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:50.887 [2024-11-27 00:03:37.443402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:50.887 [2024-11-27 00:03:37.443407] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:50.887 [2024-11-27 00:03:37.443413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:50.887 [2024-11-27 00:03:37.443419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:50.887 [2024-11-27 00:03:37.443425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:50.887 [2024-11-27 00:03:37.443430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:50.887 [2024-11-27 00:03:37.443437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.443443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:50.887 [2024-11-27 00:03:37.443450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:29:50.887 [2024-11-27 00:03:37.443455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.444708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.444735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:50.887 [2024-11-27 00:03:37.444743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.240 ms 00:29:50.887 [2024-11-27 00:03:37.444748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.444828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.887 [2024-11-27 00:03:37.444835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:50.887 [2024-11-27 00:03:37.444842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:29:50.887 [2024-11-27 00:03:37.444848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.449110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.449140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.887 [2024-11-27 00:03:37.449148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.449153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.449173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.449180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.887 [2024-11-27 00:03:37.449186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.449192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.449230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.449240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.887 [2024-11-27 00:03:37.449246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.449252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.449264] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.449270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.887 [2024-11-27 00:03:37.449276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.449284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.457176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.457208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.887 [2024-11-27 00:03:37.457216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.457222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.463929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.463956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.887 [2024-11-27 00:03:37.463964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.463970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.464014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.464022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:50.887 [2024-11-27 00:03:37.464033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.464039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.464063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.464069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:50.887 [2024-11-27 00:03:37.464075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.464081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.464130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.464138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:50.887 [2024-11-27 00:03:37.464144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.887 [2024-11-27 00:03:37.464152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.887 [2024-11-27 00:03:37.464174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.887 [2024-11-27 00:03:37.464181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:50.887 [2024-11-27 00:03:37.464188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.888 [2024-11-27 00:03:37.464194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.888 [2024-11-27 00:03:37.464222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.888 [2024-11-27 00:03:37.464228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:50.888 [2024-11-27 00:03:37.464235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.888 [2024-11-27 00:03:37.464243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.888 
[2024-11-27 00:03:37.464277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.888 [2024-11-27 00:03:37.464285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:50.888 [2024-11-27 00:03:37.464291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.888 [2024-11-27 00:03:37.464298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.888 [2024-11-27 00:03:37.464394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8602.205 ms, result 0 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94406 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94406 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94406 ']' 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:53.432 00:03:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.432 [2024-11-27 00:03:41.191211] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
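Note: the relaunch below restarts spdk_tgt with the JSON config captured before shutdown and then waits for the RPC socket; a rough stand-alone sketch of that start-and-wait step, with a plain rpc.py poll substituted (as an assumption) for the autotest waitforlisten helper:

# restart the target pinned to core 0 with the config saved before shutdown
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
# poll the default RPC socket until the target answers (stand-in for waitforlisten)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done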
00:29:53.432 [2024-11-27 00:03:41.191346] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94406 ] 00:29:53.432 [2024-11-27 00:03:41.333757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.432 [2024-11-27 00:03:41.350901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.694 [2024-11-27 00:03:41.601175] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:53.694 [2024-11-27 00:03:41.601231] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:53.694 [2024-11-27 00:03:41.747036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.747070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:53.694 [2024-11-27 00:03:41.747081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:53.694 [2024-11-27 00:03:41.747088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.747123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.747131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:53.694 [2024-11-27 00:03:41.747137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:53.694 [2024-11-27 00:03:41.747143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.747157] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:53.694 [2024-11-27 00:03:41.747325] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:53.694 [2024-11-27 00:03:41.747336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.747342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:53.694 [2024-11-27 00:03:41.747349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:29:53.694 [2024-11-27 00:03:41.747354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.748320] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:53.694 [2024-11-27 00:03:41.750600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.750770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:53.694 [2024-11-27 00:03:41.750783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.283 ms 00:29:53.694 [2024-11-27 00:03:41.750802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.750844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.750852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:53.694 [2024-11-27 00:03:41.750858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:53.694 [2024-11-27 00:03:41.750863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.755119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 
00:03:41.755143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:53.694 [2024-11-27 00:03:41.755151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.203 ms 00:29:53.694 [2024-11-27 00:03:41.755156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.755187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.755194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:53.694 [2024-11-27 00:03:41.755200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:53.694 [2024-11-27 00:03:41.755206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.755235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.755246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:53.694 [2024-11-27 00:03:41.755255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:53.694 [2024-11-27 00:03:41.755260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.755277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:53.694 [2024-11-27 00:03:41.756404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.756421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:53.694 [2024-11-27 00:03:41.756428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.131 ms 00:29:53.694 [2024-11-27 00:03:41.756434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.756459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.756466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:53.694 [2024-11-27 00:03:41.756473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:53.694 [2024-11-27 00:03:41.756478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.756493] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:53.694 [2024-11-27 00:03:41.756507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:53.694 [2024-11-27 00:03:41.756534] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:53.694 [2024-11-27 00:03:41.756547] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:53.694 [2024-11-27 00:03:41.756626] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:53.694 [2024-11-27 00:03:41.756636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:53.694 [2024-11-27 00:03:41.756644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:53.694 [2024-11-27 00:03:41.756652] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:53.694 [2024-11-27 00:03:41.756659] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:53.694 [2024-11-27 00:03:41.756665] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:53.694 [2024-11-27 00:03:41.756671] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:53.694 [2024-11-27 00:03:41.756680] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:53.694 [2024-11-27 00:03:41.756686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:53.694 [2024-11-27 00:03:41.756691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.756698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:53.694 [2024-11-27 00:03:41.756704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:29:53.694 [2024-11-27 00:03:41.756710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.756775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.694 [2024-11-27 00:03:41.756782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:53.694 [2024-11-27 00:03:41.756787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:53.694 [2024-11-27 00:03:41.756806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.694 [2024-11-27 00:03:41.756883] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:53.694 [2024-11-27 00:03:41.756907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:53.694 [2024-11-27 00:03:41.756916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:53.694 [2024-11-27 00:03:41.756932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.756938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:53.695 [2024-11-27 00:03:41.756943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.756949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:53.695 [2024-11-27 00:03:41.756954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:53.695 [2024-11-27 00:03:41.756961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:53.695 [2024-11-27 00:03:41.756966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.756971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:53.695 [2024-11-27 00:03:41.756976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:53.695 [2024-11-27 00:03:41.756983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.756988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:53.695 [2024-11-27 00:03:41.756993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:53.695 [2024-11-27 00:03:41.756998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.757006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:53.695 [2024-11-27 00:03:41.757012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:53.695 [2024-11-27 00:03:41.757017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.757022] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:53.695 [2024-11-27 00:03:41.757027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:53.695 [2024-11-27 00:03:41.757032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:53.695 [2024-11-27 00:03:41.757037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:53.695 [2024-11-27 00:03:41.757042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:53.695 [2024-11-27 00:03:41.757047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:53.695 [2024-11-27 00:03:41.757052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:53.695 [2024-11-27 00:03:41.757056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:53.695 [2024-11-27 00:03:41.757061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:53.695 [2024-11-27 00:03:41.757066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:53.695 [2024-11-27 00:03:41.757071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:53.695 [2024-11-27 00:03:41.757075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:53.695 [2024-11-27 00:03:41.757080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:53.695 [2024-11-27 00:03:41.757087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:53.695 [2024-11-27 00:03:41.757093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.757098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:53.695 [2024-11-27 00:03:41.757103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:53.695 [2024-11-27 00:03:41.757107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.757112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:53.695 [2024-11-27 00:03:41.757117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:53.695 [2024-11-27 00:03:41.757122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.757126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:53.695 [2024-11-27 00:03:41.757131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:53.695 [2024-11-27 00:03:41.757135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.757140] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:53.695 [2024-11-27 00:03:41.757150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:53.695 [2024-11-27 00:03:41.757155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:53.695 [2024-11-27 00:03:41.757160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:53.695 [2024-11-27 00:03:41.757166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:53.695 [2024-11-27 00:03:41.757174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:53.695 [2024-11-27 00:03:41.757180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:53.695 [2024-11-27 00:03:41.757185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:53.695 [2024-11-27 00:03:41.757190] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:53.695 [2024-11-27 00:03:41.757195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:53.695 [2024-11-27 00:03:41.757202] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:53.695 [2024-11-27 00:03:41.757208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:53.695 [2024-11-27 00:03:41.757220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:53.695 [2024-11-27 00:03:41.757236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:53.695 [2024-11-27 00:03:41.757241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:53.695 [2024-11-27 00:03:41.757247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:53.695 [2024-11-27 00:03:41.757252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:53.695 [2024-11-27 00:03:41.757292] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:53.695 [2024-11-27 00:03:41.757298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:53.695 [2024-11-27 00:03:41.757312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:53.695 [2024-11-27 00:03:41.757317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:53.695 [2024-11-27 00:03:41.757326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:53.695 [2024-11-27 00:03:41.757332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:53.695 [2024-11-27 00:03:41.757341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:53.695 [2024-11-27 00:03:41.757347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.503 ms 00:29:53.695 [2024-11-27 00:03:41.757352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.695 [2024-11-27 00:03:41.757384] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:53.695 [2024-11-27 00:03:41.757393] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:57.906 [2024-11-27 00:03:45.222004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.906 [2024-11-27 00:03:45.222178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:57.906 [2024-11-27 00:03:45.222195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3464.607 ms 00:29:57.906 [2024-11-27 00:03:45.222207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.906 [2024-11-27 00:03:45.229265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.906 [2024-11-27 00:03:45.229392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:57.906 [2024-11-27 00:03:45.229407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.959 ms 00:29:57.906 [2024-11-27 00:03:45.229414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.906 [2024-11-27 00:03:45.229453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.906 [2024-11-27 00:03:45.229459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:57.906 [2024-11-27 00:03:45.229466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:57.906 [2024-11-27 00:03:45.229476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.906 [2024-11-27 00:03:45.236772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.906 [2024-11-27 00:03:45.236912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:57.906 [2024-11-27 00:03:45.236926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.260 ms 00:29:57.906 [2024-11-27 00:03:45.236932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.906 [2024-11-27 00:03:45.236955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.906 [2024-11-27 00:03:45.236962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:57.906 [2024-11-27 00:03:45.236972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:57.906 [2024-11-27 00:03:45.236977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.906 [2024-11-27 00:03:45.237266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.906 [2024-11-27 00:03:45.237278] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:57.906 [2024-11-27 00:03:45.237286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:29:57.906 [2024-11-27 00:03:45.237291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.906 [2024-11-27 00:03:45.237321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.906 [2024-11-27 00:03:45.237332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:57.906 [2024-11-27 00:03:45.237339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:57.906 [2024-11-27 00:03:45.237348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.906 [2024-11-27 00:03:45.242190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.906 [2024-11-27 00:03:45.242216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:57.906 [2024-11-27 00:03:45.242229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.822 ms 00:29:57.906 [2024-11-27 00:03:45.242235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.255106] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:57.907 [2024-11-27 00:03:45.255154] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:57.907 [2024-11-27 00:03:45.255181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.255192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:57.907 [2024-11-27 00:03:45.255203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.860 ms 00:29:57.907 [2024-11-27 00:03:45.255213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.260068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.260244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:57.907 [2024-11-27 00:03:45.260265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.807 ms 00:29:57.907 [2024-11-27 00:03:45.260275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.261957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.261990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:57.907 [2024-11-27 00:03:45.262002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.639 ms 00:29:57.907 [2024-11-27 00:03:45.262010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.263707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.263839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:57.907 [2024-11-27 00:03:45.263853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.659 ms 00:29:57.907 [2024-11-27 00:03:45.263860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.264181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.264194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:57.907 [2024-11-27 
00:03:45.264203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.247 ms 00:29:57.907 [2024-11-27 00:03:45.264210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.280883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.280913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:57.907 [2024-11-27 00:03:45.280922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.653 ms 00:29:57.907 [2024-11-27 00:03:45.280928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.286531] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:57.907 [2024-11-27 00:03:45.287107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.287137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:57.907 [2024-11-27 00:03:45.287147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.148 ms 00:29:57.907 [2024-11-27 00:03:45.287153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.287191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.287199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:57.907 [2024-11-27 00:03:45.287206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:57.907 [2024-11-27 00:03:45.287212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.287258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.287266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:57.907 [2024-11-27 00:03:45.287275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:57.907 [2024-11-27 00:03:45.287280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.287302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.287309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:57.907 [2024-11-27 00:03:45.287315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:57.907 [2024-11-27 00:03:45.287320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.287344] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:57.907 [2024-11-27 00:03:45.287351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.287357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:57.907 [2024-11-27 00:03:45.287365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:57.907 [2024-11-27 00:03:45.287371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.290550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.290658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:57.907 [2024-11-27 00:03:45.290670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.164 ms 00:29:57.907 [2024-11-27 00:03:45.290677] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.290730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.290737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:57.907 [2024-11-27 00:03:45.290743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:57.907 [2024-11-27 00:03:45.290751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.291494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3544.130 ms, result 0 00:29:57.907 [2024-11-27 00:03:45.303772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.907 [2024-11-27 00:03:45.319780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:57.907 [2024-11-27 00:03:45.327847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:57.907 00:03:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.907 00:03:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:57.907 00:03:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:57.907 00:03:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:57.907 00:03:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:57.907 [2024-11-27 00:03:45.635949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.635977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:57.907 [2024-11-27 00:03:45.635986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:57.907 [2024-11-27 00:03:45.635992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.636009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.636015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:57.907 [2024-11-27 00:03:45.636023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:57.907 [2024-11-27 00:03:45.636029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.636043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:57.907 [2024-11-27 00:03:45.636049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:57.907 [2024-11-27 00:03:45.636055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:57.907 [2024-11-27 00:03:45.636063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:57.907 [2024-11-27 00:03:45.636101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.144 ms, result 0 00:29:57.907 true 00:29:57.907 00:03:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:57.907 { 00:29:57.907 "name": "ftl", 00:29:57.907 "properties": [ 00:29:57.907 { 00:29:57.907 "name": "superblock_version", 00:29:57.907 "value": 5, 00:29:57.907 "read-only": true 00:29:57.907 }, 00:29:57.907 { 
00:29:57.907 "name": "base_device", 00:29:57.907 "bands": [ 00:29:57.907 { 00:29:57.907 "id": 0, 00:29:57.907 "state": "CLOSED", 00:29:57.907 "validity": 1.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 1, 00:29:57.907 "state": "CLOSED", 00:29:57.907 "validity": 1.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 2, 00:29:57.907 "state": "CLOSED", 00:29:57.907 "validity": 0.007843137254901933 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 3, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 4, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 5, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 6, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 7, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 8, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 9, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 10, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 11, 00:29:57.907 "state": "FREE", 00:29:57.907 "validity": 0.0 00:29:57.907 }, 00:29:57.907 { 00:29:57.907 "id": 12, 00:29:57.907 "state": "FREE", 00:29:57.908 "validity": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 13, 00:29:57.908 "state": "FREE", 00:29:57.908 "validity": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 14, 00:29:57.908 "state": "FREE", 00:29:57.908 "validity": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 15, 00:29:57.908 "state": "FREE", 00:29:57.908 "validity": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 16, 00:29:57.908 "state": "FREE", 00:29:57.908 "validity": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 17, 00:29:57.908 "state": "FREE", 00:29:57.908 "validity": 0.0 00:29:57.908 } 00:29:57.908 ], 00:29:57.908 "read-only": true 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "name": "cache_device", 00:29:57.908 "type": "bdev", 00:29:57.908 "chunks": [ 00:29:57.908 { 00:29:57.908 "id": 0, 00:29:57.908 "state": "INACTIVE", 00:29:57.908 "utilization": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 1, 00:29:57.908 "state": "OPEN", 00:29:57.908 "utilization": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 2, 00:29:57.908 "state": "OPEN", 00:29:57.908 "utilization": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 3, 00:29:57.908 "state": "FREE", 00:29:57.908 "utilization": 0.0 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "id": 4, 00:29:57.908 "state": "FREE", 00:29:57.908 "utilization": 0.0 00:29:57.908 } 00:29:57.908 ], 00:29:57.908 "read-only": true 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "name": "verbose_mode", 00:29:57.908 "value": true, 00:29:57.908 "unit": "", 00:29:57.908 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:57.908 }, 00:29:57.908 { 00:29:57.908 "name": "prep_upgrade_on_shutdown", 00:29:57.908 "value": false, 00:29:57.908 "unit": "", 00:29:57.908 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:57.908 } 00:29:57.908 ] 00:29:57.908 } 00:29:57.908 00:03:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:57.908 00:03:45 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:57.908 00:03:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:58.168 Validate MD5 checksum, iteration 1 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:58.168 00:03:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:58.430 [2024-11-27 00:03:46.328215] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:29:58.430 [2024-11-27 00:03:46.328469] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94476 ] 00:29:58.430 [2024-11-27 00:03:46.476202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.430 [2024-11-27 00:03:46.499988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.818  [2024-11-27T00:03:48.894Z] Copying: 499/1024 [MB] (499 MBps) [2024-11-27T00:03:49.466Z] Copying: 1024/1024 [MB] (average 583 MBps) 00:30:01.335 00:30:01.335 00:03:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:01.335 00:03:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c7c90152cc288dd601c571f9f1569433 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c7c90152cc288dd601c571f9f1569433 != \c\7\c\9\0\1\5\2\c\c\2\8\8\d\d\6\0\1\c\5\7\1\f\9\f\1\5\6\9\4\3\3 ]] 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:03.886 Validate MD5 checksum, iteration 2 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:03.886 00:03:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:03.886 [2024-11-27 00:03:51.551851] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:30:03.886 [2024-11-27 00:03:51.551970] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94534 ] 00:30:03.886 [2024-11-27 00:03:51.699836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.886 [2024-11-27 00:03:51.724064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.273  [2024-11-27T00:03:53.977Z] Copying: 705/1024 [MB] (705 MBps) [2024-11-27T00:03:54.549Z] Copying: 1024/1024 [MB] (average 612 MBps) 00:30:06.418 00:30:06.418 00:03:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:06.418 00:03:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6faefcf2c44f97531c18658456a4496b 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6faefcf2c44f97531c18658456a4496b != \6\f\a\e\f\c\f\2\c\4\4\f\9\7\5\3\1\c\1\8\6\5\8\4\5\6\a\4\4\9\6\b ]] 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94406 ]] 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94406 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94589 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94589 00:30:08.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94589 ']' 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.361 00:03:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:08.361 [2024-11-27 00:03:56.370480] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:30:08.361 [2024-11-27 00:03:56.370572] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94589 ] 00:30:08.622 [2024-11-27 00:03:56.508093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.622 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94406 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:08.622 [2024-11-27 00:03:56.525092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.884 [2024-11-27 00:03:56.777940] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:08.884 [2024-11-27 00:03:56.778160] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:08.884 [2024-11-27 00:03:56.923639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.884 [2024-11-27 00:03:56.923674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:08.884 [2024-11-27 00:03:56.923685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:08.884 [2024-11-27 00:03:56.923691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.884 [2024-11-27 00:03:56.923728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.884 [2024-11-27 00:03:56.923737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:08.884 [2024-11-27 00:03:56.923744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:08.884 [2024-11-27 00:03:56.923749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.884 [2024-11-27 00:03:56.923763] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:08.884 [2024-11-27 00:03:56.923955] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:08.884 [2024-11-27 00:03:56.923967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.884 [2024-11-27 00:03:56.923973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:08.884 [2024-11-27 00:03:56.923980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:30:08.884 [2024-11-27 00:03:56.923985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.884 [2024-11-27 00:03:56.924168] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:08.884 [2024-11-27 00:03:56.927907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.884 [2024-11-27 00:03:56.927938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:08.884 [2024-11-27 00:03:56.927946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.740 ms 00:30:08.884 [2024-11-27 00:03:56.927952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.884 [2024-11-27 00:03:56.928882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:08.884 [2024-11-27 00:03:56.928914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:08.884 [2024-11-27 00:03:56.928924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:08.885 [2024-11-27 00:03:56.928930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.885 [2024-11-27 00:03:56.929161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.885 [2024-11-27 00:03:56.929174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:08.885 [2024-11-27 00:03:56.929182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:30:08.885 [2024-11-27 00:03:56.929187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.885 [2024-11-27 00:03:56.929213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.885 [2024-11-27 00:03:56.929219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:08.885 [2024-11-27 00:03:56.929225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:08.885 [2024-11-27 00:03:56.929231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.885 [2024-11-27 00:03:56.929249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.885 [2024-11-27 00:03:56.929258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:08.885 [2024-11-27 00:03:56.929267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:08.885 [2024-11-27 00:03:56.929273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.885 [2024-11-27 00:03:56.929289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:08.885 [2024-11-27 00:03:56.930004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.885 [2024-11-27 00:03:56.930021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:08.885 [2024-11-27 00:03:56.930028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.718 ms 00:30:08.885 [2024-11-27 00:03:56.930034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.885 [2024-11-27 00:03:56.930052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.885 [2024-11-27 00:03:56.930061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:08.885 [2024-11-27 00:03:56.930067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:08.885 [2024-11-27 00:03:56.930072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.885 [2024-11-27 00:03:56.930088] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:08.885 [2024-11-27 00:03:56.930112] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:08.885 [2024-11-27 00:03:56.930140] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:08.885 [2024-11-27 00:03:56.930154] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:08.885 [2024-11-27 00:03:56.930236] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:08.885 [2024-11-27 00:03:56.930244] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:08.885 [2024-11-27 00:03:56.930252] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:08.885 [2024-11-27 00:03:56.930259] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:08.885 [2024-11-27 00:03:56.930267] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:08.885 [2024-11-27 00:03:56.930273] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:08.885 [2024-11-27 00:03:56.930279] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:08.885 [2024-11-27 00:03:56.930284] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:08.885 [2024-11-27 00:03:56.930290] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:08.885 [2024-11-27 00:03:56.930296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.885 [2024-11-27 00:03:56.930303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:08.885 [2024-11-27 00:03:56.930309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:30:08.885 [2024-11-27 00:03:56.930314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.885 [2024-11-27 00:03:56.930378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.885 [2024-11-27 00:03:56.930385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:08.885 [2024-11-27 00:03:56.930392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:08.885 [2024-11-27 00:03:56.930400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.885 [2024-11-27 00:03:56.930475] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:08.885 [2024-11-27 00:03:56.930483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:08.885 [2024-11-27 00:03:56.930489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:08.885 [2024-11-27 00:03:56.930497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.885 [2024-11-27 00:03:56.930505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:08.885 [2024-11-27 00:03:56.930510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:08.885 [2024-11-27 00:03:56.930515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:08.885 [2024-11-27 00:03:56.930520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:08.885 [2024-11-27 00:03:56.930525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:08.885 [2024-11-27 00:03:56.930530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.885 [2024-11-27 00:03:56.930535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:08.885 [2024-11-27 00:03:56.930540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:08.885 [2024-11-27 00:03:56.930545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.885 [2024-11-27 00:03:56.930550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:08.885 [2024-11-27 00:03:56.930558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:08.885 [2024-11-27 00:03:56.930563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.885 [2024-11-27 00:03:56.930568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:08.885 [2024-11-27 00:03:56.930573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:08.885 [2024-11-27 00:03:56.930577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.885 [2024-11-27 00:03:56.930582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:08.885 [2024-11-27 00:03:56.930587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:08.885 [2024-11-27 00:03:56.930592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:08.885 [2024-11-27 00:03:56.930597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:08.885 [2024-11-27 00:03:56.930602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:08.885 [2024-11-27 00:03:56.930608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:08.885 [2024-11-27 00:03:56.930613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:08.885 [2024-11-27 00:03:56.930618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:08.885 [2024-11-27 00:03:56.930623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:08.885 [2024-11-27 00:03:56.930628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:08.885 [2024-11-27 00:03:56.930633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:08.885 [2024-11-27 00:03:56.930639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:08.885 [2024-11-27 00:03:56.930644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:08.885 [2024-11-27 00:03:56.930649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:08.885 [2024-11-27 00:03:56.930654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.885 [2024-11-27 00:03:56.930660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:08.886 [2024-11-27 00:03:56.930666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:08.886 [2024-11-27 00:03:56.930671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.886 [2024-11-27 00:03:56.930677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:08.886 [2024-11-27 00:03:56.930684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:08.886 [2024-11-27 00:03:56.930690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.886 [2024-11-27 00:03:56.930696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:08.886 [2024-11-27 00:03:56.930701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:08.886 [2024-11-27 00:03:56.930707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:08.886 [2024-11-27 00:03:56.930712] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:08.886 [2024-11-27 00:03:56.930719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:08.886 [2024-11-27 00:03:56.930725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:08.886 [2024-11-27 00:03:56.930734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:08.886 [2024-11-27 00:03:56.930740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:08.886 [2024-11-27 00:03:56.930746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:08.886 [2024-11-27 00:03:56.930752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:08.886 [2024-11-27 00:03:56.930758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:08.886 [2024-11-27 00:03:56.930764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:08.886 [2024-11-27 00:03:56.930769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:08.886 [2024-11-27 00:03:56.930776] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:08.886 [2024-11-27 00:03:56.930784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:08.886 [2024-11-27 00:03:56.930811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:08.886 [2024-11-27 00:03:56.930829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:08.886 [2024-11-27 00:03:56.930835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:08.886 [2024-11-27 00:03:56.930842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:08.886 [2024-11-27 00:03:56.930851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:08.886 [2024-11-27 00:03:56.930895] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:08.886 [2024-11-27 00:03:56.930904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:08.886 [2024-11-27 00:03:56.930924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:08.886 [2024-11-27 00:03:56.930930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:08.886 [2024-11-27 00:03:56.930936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:08.886 [2024-11-27 00:03:56.930943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.930951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:08.886 [2024-11-27 00:03:56.930957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:30:08.886 [2024-11-27 00:03:56.930967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.886 [2024-11-27 00:03:56.937052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.937144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:08.886 [2024-11-27 00:03:56.937190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.047 ms 00:30:08.886 [2024-11-27 00:03:56.937208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.886 [2024-11-27 00:03:56.937246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.937264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:08.886 [2024-11-27 00:03:56.937282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:08.886 [2024-11-27 00:03:56.937296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.886 [2024-11-27 00:03:56.944641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.944734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:08.886 [2024-11-27 00:03:56.944772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.302 ms 00:30:08.886 [2024-11-27 00:03:56.944798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.886 [2024-11-27 00:03:56.944831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.944853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:08.886 [2024-11-27 00:03:56.944869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:08.886 [2024-11-27 00:03:56.944886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.886 [2024-11-27 00:03:56.944956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.944975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:08.886 [2024-11-27 00:03:56.944995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:08.886 [2024-11-27 00:03:56.945050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:08.886 [2024-11-27 00:03:56.945094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.945115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:08.886 [2024-11-27 00:03:56.945132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:08.886 [2024-11-27 00:03:56.945174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.886 [2024-11-27 00:03:56.950061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.950220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:08.886 [2024-11-27 00:03:56.950266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.857 ms 00:30:08.886 [2024-11-27 00:03:56.950284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.886 [2024-11-27 00:03:56.950364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.886 [2024-11-27 00:03:56.950383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:08.887 [2024-11-27 00:03:56.950401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:08.887 [2024-11-27 00:03:56.950415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.887 [2024-11-27 00:03:56.961393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.887 [2024-11-27 00:03:56.961496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:08.887 [2024-11-27 00:03:56.961706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.956 ms 00:30:08.887 [2024-11-27 00:03:56.961732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.887 [2024-11-27 00:03:56.963095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.887 [2024-11-27 00:03:56.963225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:08.887 [2024-11-27 00:03:56.963371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.334 ms 00:30:08.887 [2024-11-27 00:03:56.963400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.887 [2024-11-27 00:03:56.981128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.887 [2024-11-27 00:03:56.981222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:08.887 [2024-11-27 00:03:56.981267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.668 ms 00:30:08.887 [2024-11-27 00:03:56.981285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.887 [2024-11-27 00:03:56.981395] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:08.887 [2024-11-27 00:03:56.981487] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:08.887 [2024-11-27 00:03:56.981608] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:08.887 [2024-11-27 00:03:56.981723] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:08.887 [2024-11-27 00:03:56.981764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.887 [2024-11-27 00:03:56.981781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:08.887 [2024-11-27 
00:03:56.981834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.429 ms 00:30:08.887 [2024-11-27 00:03:56.981851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.887 [2024-11-27 00:03:56.981888] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:08.887 [2024-11-27 00:03:56.981940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.887 [2024-11-27 00:03:56.981955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:08.887 [2024-11-27 00:03:56.981970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:08.887 [2024-11-27 00:03:56.981986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.887 [2024-11-27 00:03:56.984128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.887 [2024-11-27 00:03:56.984217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:08.887 [2024-11-27 00:03:56.984269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.117 ms 00:30:08.887 [2024-11-27 00:03:56.984290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.887 [2024-11-27 00:03:56.984773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.887 [2024-11-27 00:03:56.984863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:08.887 [2024-11-27 00:03:56.984924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:08.887 [2024-11-27 00:03:56.984944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.887 [2024-11-27 00:03:56.984994] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:08.887 [2024-11-27 00:03:56.985187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.887 [2024-11-27 00:03:56.985255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:08.887 [2024-11-27 00:03:56.985280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:30:08.887 [2024-11-27 00:03:56.985324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.458 [2024-11-27 00:03:57.569358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.458 [2024-11-27 00:03:57.569470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:09.458 [2024-11-27 00:03:57.569516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 583.786 ms 00:30:09.458 [2024-11-27 00:03:57.569534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.458 [2024-11-27 00:03:57.570968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.458 [2024-11-27 00:03:57.571062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:09.458 [2024-11-27 00:03:57.571107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.149 ms 00:30:09.458 [2024-11-27 00:03:57.571124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.458 [2024-11-27 00:03:57.571525] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:09.458 [2024-11-27 00:03:57.571567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.458 [2024-11-27 00:03:57.571651] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:09.458 [2024-11-27 00:03:57.571670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:30:09.458 [2024-11-27 00:03:57.571685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.458 [2024-11-27 00:03:57.571716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.458 [2024-11-27 00:03:57.571738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:09.458 [2024-11-27 00:03:57.571754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:09.458 [2024-11-27 00:03:57.571769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.458 [2024-11-27 00:03:57.571815] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 586.818 ms, result 0 00:30:09.458 [2024-11-27 00:03:57.571870] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:09.458 [2024-11-27 00:03:57.571984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.458 [2024-11-27 00:03:57.572051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:09.458 [2024-11-27 00:03:57.572072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.115 ms 00:30:09.458 [2024-11-27 00:03:57.572086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.398 [2024-11-27 00:03:58.233400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.233503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:10.399 [2024-11-27 00:03:58.233546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 661.023 ms 00:30:10.399 [2024-11-27 00:03:58.233563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.234986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.235076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:10.399 [2024-11-27 00:03:58.235120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.127 ms 00:30:10.399 [2024-11-27 00:03:58.235138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.235564] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:10.399 [2024-11-27 00:03:58.235606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.235658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:10.399 [2024-11-27 00:03:58.235703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.439 ms 00:30:10.399 [2024-11-27 00:03:58.235719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.235808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.235828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:10.399 [2024-11-27 00:03:58.235844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:10.399 [2024-11-27 00:03:58.235862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 
00:03:58.235900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 664.032 ms, result 0 00:30:10.399 [2024-11-27 00:03:58.236009] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:10.399 [2024-11-27 00:03:58.236041] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:10.399 [2024-11-27 00:03:58.236097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.236114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:10.399 [2024-11-27 00:03:58.236132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1251.113 ms 00:30:10.399 [2024-11-27 00:03:58.236184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.236220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.236237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:10.399 [2024-11-27 00:03:58.236253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:10.399 [2024-11-27 00:03:58.236295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.242159] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:10.399 [2024-11-27 00:03:58.242302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.242327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:10.399 [2024-11-27 00:03:58.242375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.983 ms 00:30:10.399 [2024-11-27 00:03:58.242393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.242906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.242964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:10.399 [2024-11-27 00:03:58.243003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.461 ms 00:30:10.399 [2024-11-27 00:03:58.243021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.244700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.244764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:10.399 [2024-11-27 00:03:58.244821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.657 ms 00:30:10.399 [2024-11-27 00:03:58.244839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.244891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.244910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:10.399 [2024-11-27 00:03:58.244926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:10.399 [2024-11-27 00:03:58.244941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.245027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.245115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:10.399 
[2024-11-27 00:03:58.245136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:10.399 [2024-11-27 00:03:58.245151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.245177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.245215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:10.399 [2024-11-27 00:03:58.245232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:10.399 [2024-11-27 00:03:58.245249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.245281] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:10.399 [2024-11-27 00:03:58.245323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.245340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:10.399 [2024-11-27 00:03:58.245355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:30:10.399 [2024-11-27 00:03:58.245372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.245424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.399 [2024-11-27 00:03:58.245442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:10.399 [2024-11-27 00:03:58.245462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:10.399 [2024-11-27 00:03:58.245698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.399 [2024-11-27 00:03:58.247549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1323.069 ms, result 0 00:30:10.399 [2024-11-27 00:03:58.262799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.399 [2024-11-27 00:03:58.278807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:10.399 [2024-11-27 00:03:58.286905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:10.969 Validate MD5 checksum, iteration 1 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:10.969 00:03:58 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:10.969 00:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:10.969 [2024-11-27 00:03:59.026589] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:30:10.969 [2024-11-27 00:03:59.026991] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94623 ] 00:30:11.229 [2024-11-27 00:03:59.179148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.229 [2024-11-27 00:03:59.205522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.613  [2024-11-27T00:04:01.686Z] Copying: 507/1024 [MB] (507 MBps) [2024-11-27T00:04:01.686Z] Copying: 997/1024 [MB] (490 MBps) [2024-11-27T00:04:02.255Z] Copying: 1024/1024 [MB] (average 501 MBps) 00:30:14.124 00:30:14.124 00:04:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:14.124 00:04:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c7c90152cc288dd601c571f9f1569433 00:30:16.669 Validate MD5 checksum, iteration 2 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c7c90152cc288dd601c571f9f1569433 != \c\7\c\9\0\1\5\2\c\c\2\8\8\d\d\6\0\1\c\5\7\1\f\9\f\1\5\6\9\4\3\3 ]] 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:16.669 00:04:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:16.669 
[2024-11-27 00:04:04.422582] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:30:16.669 [2024-11-27 00:04:04.423330] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94684 ] 00:30:16.669 [2024-11-27 00:04:04.560778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.669 [2024-11-27 00:04:04.583038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.043  [2024-11-27T00:04:06.743Z] Copying: 639/1024 [MB] (639 MBps) [2024-11-27T00:04:10.953Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:30:22.822 00:30:22.822 00:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:22.822 00:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6faefcf2c44f97531c18658456a4496b 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6faefcf2c44f97531c18658456a4496b != \6\f\a\e\f\c\f\2\c\4\4\f\9\7\5\3\1\c\1\8\6\5\8\4\5\6\a\4\4\9\6\b ]] 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94589 ]] 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94589 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94589 ']' 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94589 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94589 00:30:24.735 killing process with pid 94589 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94589' 
00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94589 00:30:24.735 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94589 00:30:24.735 [2024-11-27 00:04:12.733661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:24.735 [2024-11-27 00:04:12.737110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.737143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:24.735 [2024-11-27 00:04:12.737153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:24.735 [2024-11-27 00:04:12.737159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.737177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:24.735 [2024-11-27 00:04:12.737548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.737569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:24.735 [2024-11-27 00:04:12.737579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:30:24.735 [2024-11-27 00:04:12.737586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.737777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.737800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:24.735 [2024-11-27 00:04:12.737807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:30:24.735 [2024-11-27 00:04:12.737813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.738882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.739024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:24.735 [2024-11-27 00:04:12.739036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.056 ms 00:30:24.735 [2024-11-27 00:04:12.739047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.739916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.739933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:24.735 [2024-11-27 00:04:12.739940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:30:24.735 [2024-11-27 00:04:12.739947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.741608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.741646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:24.735 [2024-11-27 00:04:12.741661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.627 ms 00:30:24.735 [2024-11-27 00:04:12.741668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.743049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.743076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:24.735 [2024-11-27 00:04:12.743084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.352 ms 00:30:24.735 [2024-11-27 
00:04:12.743090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.743149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.743158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:24.735 [2024-11-27 00:04:12.743164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:24.735 [2024-11-27 00:04:12.743174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.744289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.744322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:24.735 [2024-11-27 00:04:12.744329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.102 ms 00:30:24.735 [2024-11-27 00:04:12.744334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.745434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.745458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:24.735 [2024-11-27 00:04:12.745464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.075 ms 00:30:24.735 [2024-11-27 00:04:12.745470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.746774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.746812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:24.735 [2024-11-27 00:04:12.746819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.281 ms 00:30:24.735 [2024-11-27 00:04:12.746825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.747925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.735 [2024-11-27 00:04:12.747951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:24.735 [2024-11-27 00:04:12.747958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.054 ms 00:30:24.735 [2024-11-27 00:04:12.747963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.735 [2024-11-27 00:04:12.747987] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:24.735 [2024-11-27 00:04:12.747998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:24.735 [2024-11-27 00:04:12.748006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:24.735 [2024-11-27 00:04:12.748012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:24.735 [2024-11-27 00:04:12.748018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:24.735 [2024-11-27 00:04:12.748024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:24.735 [2024-11-27 00:04:12.748030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:24.735 [2024-11-27 00:04:12.748036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:24.735 [2024-11-27 00:04:12.748041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:30:24.735 [2024-11-27 00:04:12.748047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:24.735 [2024-11-27 00:04:12.748053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:24.735 [2024-11-27 00:04:12.748059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:24.736 [2024-11-27 00:04:12.748065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:24.736 [2024-11-27 00:04:12.748071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:24.736 [2024-11-27 00:04:12.748076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:24.736 [2024-11-27 00:04:12.748082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:24.736 [2024-11-27 00:04:12.748088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:24.736 [2024-11-27 00:04:12.748093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:24.736 [2024-11-27 00:04:12.748099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:24.736 [2024-11-27 00:04:12.748106] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:24.736 [2024-11-27 00:04:12.748112] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9424c1e2-1ab0-4a4e-bb8d-8be1dadffe97 00:30:24.736 [2024-11-27 00:04:12.748117] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:24.736 [2024-11-27 00:04:12.748123] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:24.736 [2024-11-27 00:04:12.748129] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:24.736 [2024-11-27 00:04:12.748135] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:24.736 [2024-11-27 00:04:12.748141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:24.736 [2024-11-27 00:04:12.748146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:24.736 [2024-11-27 00:04:12.748155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:24.736 [2024-11-27 00:04:12.748160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:24.736 [2024-11-27 00:04:12.748165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:24.736 [2024-11-27 00:04:12.748171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.736 [2024-11-27 00:04:12.748177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:24.736 [2024-11-27 00:04:12.748184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:30:24.736 [2024-11-27 00:04:12.748190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.749411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.736 [2024-11-27 00:04:12.749441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:24.736 [2024-11-27 00:04:12.749449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.209 ms 00:30:24.736 [2024-11-27 00:04:12.749454] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.749525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.736 [2024-11-27 00:04:12.749531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:24.736 [2024-11-27 00:04:12.749537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:24.736 [2024-11-27 00:04:12.749543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.753993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.754018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:24.736 [2024-11-27 00:04:12.754026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.754035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.754056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.754064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:24.736 [2024-11-27 00:04:12.754070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.754075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.754138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.754146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:24.736 [2024-11-27 00:04:12.754153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.754159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.754175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.754181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:24.736 [2024-11-27 00:04:12.754187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.754192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.762077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.762122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:24.736 [2024-11-27 00:04:12.762130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.762136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.768118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.768305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:24.736 [2024-11-27 00:04:12.768318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.768325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.768933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.768970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:24.736 [2024-11-27 00:04:12.768989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:30:24.736 [2024-11-27 00:04:12.769056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.769120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.769142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:24.736 [2024-11-27 00:04:12.769158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.769208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.769273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.769292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:24.736 [2024-11-27 00:04:12.769309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.769351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.769390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.769408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:24.736 [2024-11-27 00:04:12.769427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.769481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.769519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.769537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:24.736 [2024-11-27 00:04:12.769553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.769567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.769632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.736 [2024-11-27 00:04:12.769646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:24.736 [2024-11-27 00:04:12.769652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.736 [2024-11-27 00:04:12.769658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.736 [2024-11-27 00:04:12.769751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 32.622 ms, result 0 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:24.998 Remove shared memory files 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f 
rm -f 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94406 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:24.998 ************************************ 00:30:24.998 END TEST ftl_upgrade_shutdown 00:30:24.998 ************************************ 00:30:24.998 00:30:24.998 real 1m16.418s 00:30:24.998 user 1m40.384s 00:30:24.998 sys 0m21.278s 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.998 00:04:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:24.998 00:04:12 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:24.998 00:04:12 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:24.998 00:04:12 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:24.998 00:04:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.998 00:04:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:24.998 ************************************ 00:30:24.998 START TEST ftl_restore_fast 00:30:24.998 ************************************ 00:30:24.998 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:24.998 * Looking for test storage... 00:30:24.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:24.998 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:24.998 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:30:24.998 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.259 --rc genhtml_branch_coverage=1 00:30:25.259 --rc genhtml_function_coverage=1 00:30:25.259 --rc genhtml_legend=1 00:30:25.259 --rc geninfo_all_blocks=1 00:30:25.259 --rc geninfo_unexecuted_blocks=1 00:30:25.259 00:30:25.259 ' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.259 --rc genhtml_branch_coverage=1 00:30:25.259 --rc genhtml_function_coverage=1 00:30:25.259 --rc genhtml_legend=1 00:30:25.259 --rc geninfo_all_blocks=1 00:30:25.259 --rc geninfo_unexecuted_blocks=1 00:30:25.259 00:30:25.259 ' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.259 --rc genhtml_branch_coverage=1 00:30:25.259 --rc genhtml_function_coverage=1 00:30:25.259 --rc genhtml_legend=1 00:30:25.259 --rc geninfo_all_blocks=1 00:30:25.259 --rc geninfo_unexecuted_blocks=1 00:30:25.259 00:30:25.259 ' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.259 --rc genhtml_branch_coverage=1 00:30:25.259 --rc genhtml_function_coverage=1 00:30:25.259 --rc genhtml_legend=1 00:30:25.259 --rc geninfo_all_blocks=1 00:30:25.259 --rc geninfo_unexecuted_blocks=1 00:30:25.259 00:30:25.259 ' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.5ikPCZkUgj 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:25.259 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:25.260 00:04:13 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94854 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94854 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94854 ']' 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.260 00:04:13 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:25.260 [2024-11-27 00:04:13.257024] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 00:30:25.260 [2024-11-27 00:04:13.257327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94854 ] 00:30:25.520 [2024-11-27 00:04:13.400106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.520 [2024-11-27 00:04:13.416942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.090 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.090 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:26.090 00:04:14 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:26.090 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:26.090 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:26.090 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:26.090 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:26.090 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:26.350 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:26.350 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:26.350 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:26.350 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:26.350 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:26.350 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:26.350 00:04:14 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:30:26.350 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:26.610 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:26.610 { 00:30:26.610 "name": "nvme0n1", 00:30:26.610 "aliases": [ 00:30:26.610 "98d6d2b1-751a-4ed5-9782-0477c1be9c80" 00:30:26.610 ], 00:30:26.610 "product_name": "NVMe disk", 00:30:26.610 "block_size": 4096, 00:30:26.610 "num_blocks": 1310720, 00:30:26.610 "uuid": "98d6d2b1-751a-4ed5-9782-0477c1be9c80", 00:30:26.610 "numa_id": -1, 00:30:26.610 "assigned_rate_limits": { 00:30:26.610 "rw_ios_per_sec": 0, 00:30:26.610 "rw_mbytes_per_sec": 0, 00:30:26.610 "r_mbytes_per_sec": 0, 00:30:26.610 "w_mbytes_per_sec": 0 00:30:26.610 }, 00:30:26.610 "claimed": true, 00:30:26.610 "claim_type": "read_many_write_one", 00:30:26.610 "zoned": false, 00:30:26.610 "supported_io_types": { 00:30:26.610 "read": true, 00:30:26.610 "write": true, 00:30:26.611 "unmap": true, 00:30:26.611 "flush": true, 00:30:26.611 "reset": true, 00:30:26.611 "nvme_admin": true, 00:30:26.611 "nvme_io": true, 00:30:26.611 "nvme_io_md": false, 00:30:26.611 "write_zeroes": true, 00:30:26.611 "zcopy": false, 00:30:26.611 "get_zone_info": false, 00:30:26.611 "zone_management": false, 00:30:26.611 "zone_append": false, 00:30:26.611 "compare": true, 00:30:26.611 "compare_and_write": false, 00:30:26.611 "abort": true, 00:30:26.611 "seek_hole": false, 00:30:26.611 "seek_data": false, 00:30:26.611 "copy": true, 00:30:26.611 "nvme_iov_md": false 00:30:26.611 }, 00:30:26.611 "driver_specific": { 00:30:26.611 "nvme": [ 00:30:26.611 { 00:30:26.611 "pci_address": "0000:00:11.0", 00:30:26.611 "trid": { 00:30:26.611 "trtype": "PCIe", 00:30:26.611 "traddr": "0000:00:11.0" 00:30:26.611 }, 00:30:26.611 "ctrlr_data": { 00:30:26.611 "cntlid": 0, 00:30:26.611 "vendor_id": "0x1b36", 00:30:26.611 "model_number": "QEMU NVMe Ctrl", 00:30:26.611 "serial_number": "12341", 00:30:26.611 "firmware_revision": "8.0.0", 00:30:26.611 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:26.611 "oacs": { 00:30:26.611 "security": 0, 00:30:26.611 "format": 1, 00:30:26.611 "firmware": 0, 00:30:26.611 "ns_manage": 1 00:30:26.611 }, 00:30:26.611 "multi_ctrlr": false, 00:30:26.611 "ana_reporting": false 00:30:26.611 }, 00:30:26.611 "vs": { 00:30:26.611 "nvme_version": "1.4" 00:30:26.611 }, 00:30:26.611 "ns_data": { 00:30:26.611 "id": 1, 00:30:26.611 "can_share": false 00:30:26.611 } 00:30:26.611 } 00:30:26.611 ], 00:30:26.611 "mp_policy": "active_passive" 00:30:26.611 } 00:30:26.611 } 00:30:26.611 ]' 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:26.611 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:26.871 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=c47206db-c690-45c9-b7a4-3d7848b3d35c 00:30:26.871 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:26.871 00:04:14 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c47206db-c690-45c9-b7a4-3d7848b3d35c 00:30:27.131 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ff582a7e-0643-4fc5-b08b-2b9193323c86 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ff582a7e-0643-4fc5-b08b-2b9193323c86 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:27.389 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:27.649 { 00:30:27.649 "name": "cfbe7063-475c-4572-b39a-583e78b3af0e", 00:30:27.649 "aliases": [ 00:30:27.649 "lvs/nvme0n1p0" 00:30:27.649 ], 00:30:27.649 "product_name": "Logical Volume", 00:30:27.649 "block_size": 4096, 00:30:27.649 "num_blocks": 26476544, 00:30:27.649 "uuid": "cfbe7063-475c-4572-b39a-583e78b3af0e", 00:30:27.649 "assigned_rate_limits": { 00:30:27.649 "rw_ios_per_sec": 0, 00:30:27.649 "rw_mbytes_per_sec": 0, 00:30:27.649 "r_mbytes_per_sec": 0, 00:30:27.649 "w_mbytes_per_sec": 0 00:30:27.649 }, 00:30:27.649 "claimed": false, 00:30:27.649 "zoned": false, 00:30:27.649 "supported_io_types": { 00:30:27.649 "read": true, 00:30:27.649 "write": true, 00:30:27.649 "unmap": true, 00:30:27.649 "flush": false, 00:30:27.649 "reset": true, 00:30:27.649 "nvme_admin": false, 00:30:27.649 "nvme_io": false, 00:30:27.649 "nvme_io_md": false, 00:30:27.649 "write_zeroes": true, 00:30:27.649 "zcopy": false, 00:30:27.649 "get_zone_info": false, 00:30:27.649 "zone_management": false, 00:30:27.649 
"zone_append": false, 00:30:27.649 "compare": false, 00:30:27.649 "compare_and_write": false, 00:30:27.649 "abort": false, 00:30:27.649 "seek_hole": true, 00:30:27.649 "seek_data": true, 00:30:27.649 "copy": false, 00:30:27.649 "nvme_iov_md": false 00:30:27.649 }, 00:30:27.649 "driver_specific": { 00:30:27.649 "lvol": { 00:30:27.649 "lvol_store_uuid": "ff582a7e-0643-4fc5-b08b-2b9193323c86", 00:30:27.649 "base_bdev": "nvme0n1", 00:30:27.649 "thin_provision": true, 00:30:27.649 "num_allocated_clusters": 0, 00:30:27.649 "snapshot": false, 00:30:27.649 "clone": false, 00:30:27.649 "esnap_clone": false 00:30:27.649 } 00:30:27.649 } 00:30:27.649 } 00:30:27.649 ]' 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:27.649 00:04:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:27.910 00:04:16 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:27.910 00:04:16 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:27.910 00:04:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:27.910 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:27.910 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:27.910 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:27.910 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:27.910 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:28.171 { 00:30:28.171 "name": "cfbe7063-475c-4572-b39a-583e78b3af0e", 00:30:28.171 "aliases": [ 00:30:28.171 "lvs/nvme0n1p0" 00:30:28.171 ], 00:30:28.171 "product_name": "Logical Volume", 00:30:28.171 "block_size": 4096, 00:30:28.171 "num_blocks": 26476544, 00:30:28.171 "uuid": "cfbe7063-475c-4572-b39a-583e78b3af0e", 00:30:28.171 "assigned_rate_limits": { 00:30:28.171 "rw_ios_per_sec": 0, 00:30:28.171 "rw_mbytes_per_sec": 0, 00:30:28.171 "r_mbytes_per_sec": 0, 00:30:28.171 "w_mbytes_per_sec": 0 00:30:28.171 }, 00:30:28.171 "claimed": false, 00:30:28.171 "zoned": false, 00:30:28.171 "supported_io_types": { 00:30:28.171 "read": true, 00:30:28.171 "write": true, 00:30:28.171 "unmap": true, 00:30:28.171 "flush": false, 00:30:28.171 "reset": true, 00:30:28.171 "nvme_admin": false, 00:30:28.171 "nvme_io": false, 00:30:28.171 "nvme_io_md": false, 00:30:28.171 "write_zeroes": true, 00:30:28.171 "zcopy": false, 00:30:28.171 "get_zone_info": false, 00:30:28.171 
"zone_management": false, 00:30:28.171 "zone_append": false, 00:30:28.171 "compare": false, 00:30:28.171 "compare_and_write": false, 00:30:28.171 "abort": false, 00:30:28.171 "seek_hole": true, 00:30:28.171 "seek_data": true, 00:30:28.171 "copy": false, 00:30:28.171 "nvme_iov_md": false 00:30:28.171 }, 00:30:28.171 "driver_specific": { 00:30:28.171 "lvol": { 00:30:28.171 "lvol_store_uuid": "ff582a7e-0643-4fc5-b08b-2b9193323c86", 00:30:28.171 "base_bdev": "nvme0n1", 00:30:28.171 "thin_provision": true, 00:30:28.171 "num_allocated_clusters": 0, 00:30:28.171 "snapshot": false, 00:30:28.171 "clone": false, 00:30:28.171 "esnap_clone": false 00:30:28.171 } 00:30:28.171 } 00:30:28.171 } 00:30:28.171 ]' 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:28.171 00:04:16 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:28.432 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:28.432 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:28.432 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:28.432 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:28.432 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:28.432 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:28.432 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfbe7063-475c-4572-b39a-583e78b3af0e 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:28.693 { 00:30:28.693 "name": "cfbe7063-475c-4572-b39a-583e78b3af0e", 00:30:28.693 "aliases": [ 00:30:28.693 "lvs/nvme0n1p0" 00:30:28.693 ], 00:30:28.693 "product_name": "Logical Volume", 00:30:28.693 "block_size": 4096, 00:30:28.693 "num_blocks": 26476544, 00:30:28.693 "uuid": "cfbe7063-475c-4572-b39a-583e78b3af0e", 00:30:28.693 "assigned_rate_limits": { 00:30:28.693 "rw_ios_per_sec": 0, 00:30:28.693 "rw_mbytes_per_sec": 0, 00:30:28.693 "r_mbytes_per_sec": 0, 00:30:28.693 "w_mbytes_per_sec": 0 00:30:28.693 }, 00:30:28.693 "claimed": false, 00:30:28.693 "zoned": false, 00:30:28.693 "supported_io_types": { 00:30:28.693 "read": true, 00:30:28.693 "write": true, 00:30:28.693 "unmap": true, 00:30:28.693 "flush": false, 00:30:28.693 "reset": true, 00:30:28.693 "nvme_admin": false, 00:30:28.693 "nvme_io": false, 00:30:28.693 "nvme_io_md": false, 00:30:28.693 "write_zeroes": true, 00:30:28.693 "zcopy": false, 00:30:28.693 "get_zone_info": false, 00:30:28.693 "zone_management": false, 00:30:28.693 "zone_append": false, 00:30:28.693 "compare": false, 00:30:28.693 "compare_and_write": false, 00:30:28.693 "abort": false, 
00:30:28.693 "seek_hole": true, 00:30:28.693 "seek_data": true, 00:30:28.693 "copy": false, 00:30:28.693 "nvme_iov_md": false 00:30:28.693 }, 00:30:28.693 "driver_specific": { 00:30:28.693 "lvol": { 00:30:28.693 "lvol_store_uuid": "ff582a7e-0643-4fc5-b08b-2b9193323c86", 00:30:28.693 "base_bdev": "nvme0n1", 00:30:28.693 "thin_provision": true, 00:30:28.693 "num_allocated_clusters": 0, 00:30:28.693 "snapshot": false, 00:30:28.693 "clone": false, 00:30:28.693 "esnap_clone": false 00:30:28.693 } 00:30:28.693 } 00:30:28.693 } 00:30:28.693 ]' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cfbe7063-475c-4572-b39a-583e78b3af0e --l2p_dram_limit 10' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:28.693 00:04:16 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cfbe7063-475c-4572-b39a-583e78b3af0e --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:28.956 [2024-11-27 00:04:16.930284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.930326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:28.956 [2024-11-27 00:04:16.930337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:28.956 [2024-11-27 00:04:16.930344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.930383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.930395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.956 [2024-11-27 00:04:16.930401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:28.956 [2024-11-27 00:04:16.930409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.930425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:28.956 [2024-11-27 00:04:16.930599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:28.956 [2024-11-27 00:04:16.930611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.930619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.956 [2024-11-27 00:04:16.930628] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:30:28.956 [2024-11-27 00:04:16.930635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.930657] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 42e104c0-c23f-4953-a6b2-7de7e0f43cd2 00:30:28.956 [2024-11-27 00:04:16.931595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.931617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:28.956 [2024-11-27 00:04:16.931627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:28.956 [2024-11-27 00:04:16.931633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.936433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.936520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.956 [2024-11-27 00:04:16.936565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:30:28.956 [2024-11-27 00:04:16.936584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.936655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.936673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:28.956 [2024-11-27 00:04:16.936690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:28.956 [2024-11-27 00:04:16.936705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.936763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.936782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:28.956 [2024-11-27 00:04:16.936815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:28.956 [2024-11-27 00:04:16.936861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.936892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:28.956 [2024-11-27 00:04:16.938171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.938250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.956 [2024-11-27 00:04:16.938328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:30:28.956 [2024-11-27 00:04:16.938348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.938382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.938400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:28.956 [2024-11-27 00:04:16.938415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:28.956 [2024-11-27 00:04:16.938432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.938453] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:28.956 [2024-11-27 00:04:16.938567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:28.956 [2024-11-27 00:04:16.939077] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:28.956 [2024-11-27 00:04:16.939105] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:28.956 [2024-11-27 00:04:16.939130] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:28.956 [2024-11-27 00:04:16.939156] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:28.956 [2024-11-27 00:04:16.939178] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:28.956 [2024-11-27 00:04:16.939195] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:28.956 [2024-11-27 00:04:16.939209] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:28.956 [2024-11-27 00:04:16.939224] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:28.956 [2024-11-27 00:04:16.939239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.939255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:28.956 [2024-11-27 00:04:16.939274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:30:28.956 [2024-11-27 00:04:16.939289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.939476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.956 [2024-11-27 00:04:16.939500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:28.956 [2024-11-27 00:04:16.939515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:28.956 [2024-11-27 00:04:16.939532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.956 [2024-11-27 00:04:16.939613] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:28.956 [2024-11-27 00:04:16.939632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:28.956 [2024-11-27 00:04:16.939647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.956 [2024-11-27 00:04:16.939666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.956 [2024-11-27 00:04:16.939681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:28.956 [2024-11-27 00:04:16.939697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:28.956 [2024-11-27 00:04:16.939711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:28.956 [2024-11-27 00:04:16.939727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:28.956 [2024-11-27 00:04:16.939786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:28.956 [2024-11-27 00:04:16.939815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.956 [2024-11-27 00:04:16.939830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:28.956 [2024-11-27 00:04:16.939845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:28.956 [2024-11-27 00:04:16.939860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.956 [2024-11-27 00:04:16.939877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:28.956 [2024-11-27 00:04:16.939891] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:28.956 [2024-11-27 00:04:16.939908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.956 [2024-11-27 00:04:16.939921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:28.957 [2024-11-27 00:04:16.939936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:28.957 [2024-11-27 00:04:16.939983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:28.957 [2024-11-27 00:04:16.940016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.957 [2024-11-27 00:04:16.940044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:28.957 [2024-11-27 00:04:16.940060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.957 [2024-11-27 00:04:16.940088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:28.957 [2024-11-27 00:04:16.940104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.957 [2024-11-27 00:04:16.940133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:28.957 [2024-11-27 00:04:16.940182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.957 [2024-11-27 00:04:16.940247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:28.957 [2024-11-27 00:04:16.940320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.957 [2024-11-27 00:04:16.940352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:28.957 [2024-11-27 00:04:16.940367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:28.957 [2024-11-27 00:04:16.940381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.957 [2024-11-27 00:04:16.940396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:28.957 [2024-11-27 00:04:16.940411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:28.957 [2024-11-27 00:04:16.940426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:28.957 [2024-11-27 00:04:16.940509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:28.957 [2024-11-27 00:04:16.940526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940543] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:28.957 [2024-11-27 00:04:16.940563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:28.957 [2024-11-27 00:04:16.940581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:30:28.957 [2024-11-27 00:04:16.940621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.957 [2024-11-27 00:04:16.940642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:28.957 [2024-11-27 00:04:16.940656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:28.957 [2024-11-27 00:04:16.940672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:28.957 [2024-11-27 00:04:16.940687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:28.957 [2024-11-27 00:04:16.940702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:28.957 [2024-11-27 00:04:16.940737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:28.957 [2024-11-27 00:04:16.940760] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:28.957 [2024-11-27 00:04:16.940785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.957 [2024-11-27 00:04:16.940820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:28.957 [2024-11-27 00:04:16.940959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:28.957 [2024-11-27 00:04:16.940983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:28.957 [2024-11-27 00:04:16.941007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:28.957 [2024-11-27 00:04:16.941030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:28.957 [2024-11-27 00:04:16.941052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:28.957 [2024-11-27 00:04:16.941098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:28.957 [2024-11-27 00:04:16.941155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:28.957 [2024-11-27 00:04:16.941180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:28.957 [2024-11-27 00:04:16.941202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:28.957 [2024-11-27 00:04:16.941226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:28.957 [2024-11-27 00:04:16.941247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:28.957 [2024-11-27 00:04:16.941271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:28.957 [2024-11-27 00:04:16.941292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:30:28.957 [2024-11-27 00:04:16.941355] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:28.957 [2024-11-27 00:04:16.941379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.957 [2024-11-27 00:04:16.941404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:28.957 [2024-11-27 00:04:16.941427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:28.957 [2024-11-27 00:04:16.941451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:28.957 [2024-11-27 00:04:16.941472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:28.957 [2024-11-27 00:04:16.941531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.957 [2024-11-27 00:04:16.941733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:28.957 [2024-11-27 00:04:16.941847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.966 ms 00:30:28.957 [2024-11-27 00:04:16.941865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.957 [2024-11-27 00:04:16.941929] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:28.957 [2024-11-27 00:04:16.941995] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:32.278 [2024-11-27 00:04:20.396259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.278 [2024-11-27 00:04:20.396445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:32.278 [2024-11-27 00:04:20.396501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3454.315 ms 00:30:32.278 [2024-11-27 00:04:20.396520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.278 [2024-11-27 00:04:20.404079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.278 [2024-11-27 00:04:20.404217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:32.278 [2024-11-27 00:04:20.404266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.482 ms 00:30:32.278 [2024-11-27 00:04:20.404285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.278 [2024-11-27 00:04:20.404372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.278 [2024-11-27 00:04:20.404390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:32.278 [2024-11-27 00:04:20.404408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:32.278 [2024-11-27 00:04:20.404422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.539 [2024-11-27 00:04:20.411920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.539 [2024-11-27 00:04:20.412035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:32.539 [2024-11-27 00:04:20.412050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.448 ms 00:30:32.539 [2024-11-27 00:04:20.412058] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.539 [2024-11-27 00:04:20.412082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.539 [2024-11-27 00:04:20.412089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:32.539 [2024-11-27 00:04:20.412097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:32.539 [2024-11-27 00:04:20.412106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.539 [2024-11-27 00:04:20.412398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.539 [2024-11-27 00:04:20.412412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:32.539 [2024-11-27 00:04:20.412421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:30:32.539 [2024-11-27 00:04:20.412427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.539 [2024-11-27 00:04:20.412517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.539 [2024-11-27 00:04:20.412525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:32.539 [2024-11-27 00:04:20.412533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:32.539 [2024-11-27 00:04:20.412539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.539 [2024-11-27 00:04:20.417448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.539 [2024-11-27 00:04:20.417475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:32.540 [2024-11-27 00:04:20.417484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.892 ms 00:30:32.540 [2024-11-27 00:04:20.417490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.436730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:32.540 [2024-11-27 00:04:20.439971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.440017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:32.540 [2024-11-27 00:04:20.440032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.422 ms 00:30:32.540 [2024-11-27 00:04:20.440043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.554628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.554666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:32.540 [2024-11-27 00:04:20.554675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.545 ms 00:30:32.540 [2024-11-27 00:04:20.554684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.554843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.554854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:32.540 [2024-11-27 00:04:20.554861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:30:32.540 [2024-11-27 00:04:20.554868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.558652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.558772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:30:32.540 [2024-11-27 00:04:20.558787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.769 ms 00:30:32.540 [2024-11-27 00:04:20.558806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.562433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.562536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:32.540 [2024-11-27 00:04:20.562592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms 00:30:32.540 [2024-11-27 00:04:20.562610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.562862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.562892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:32.540 [2024-11-27 00:04:20.562909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:30:32.540 [2024-11-27 00:04:20.563166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.593618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.593721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:32.540 [2024-11-27 00:04:20.593764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.422 ms 00:30:32.540 [2024-11-27 00:04:20.593784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.598262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.598356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:32.540 [2024-11-27 00:04:20.598395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.421 ms 00:30:32.540 [2024-11-27 00:04:20.598418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.601965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.602055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:32.540 [2024-11-27 00:04:20.602094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.514 ms 00:30:32.540 [2024-11-27 00:04:20.602128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.606450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.606545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:32.540 [2024-11-27 00:04:20.606585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:30:32.540 [2024-11-27 00:04:20.606606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.606641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.606660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:32.540 [2024-11-27 00:04:20.606676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:32.540 [2024-11-27 00:04:20.606692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.606749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.540 [2024-11-27 00:04:20.606768] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:32.540 [2024-11-27 00:04:20.606842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:32.540 [2024-11-27 00:04:20.606866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.540 [2024-11-27 00:04:20.607546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3676.936 ms, result 0 00:30:32.540 { 00:30:32.540 "name": "ftl0", 00:30:32.540 "uuid": "42e104c0-c23f-4953-a6b2-7de7e0f43cd2" 00:30:32.540 } 00:30:32.540 00:04:20 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:32.540 00:04:20 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:32.801 00:04:20 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:32.801 00:04:20 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:33.064 [2024-11-27 00:04:21.028643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.028752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:33.064 [2024-11-27 00:04:21.028769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:33.064 [2024-11-27 00:04:21.028776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.028809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:33.064 [2024-11-27 00:04:21.029203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.029220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:33.064 [2024-11-27 00:04:21.029231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:30:33.064 [2024-11-27 00:04:21.029238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.029430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.029443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:33.064 [2024-11-27 00:04:21.029452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:30:33.064 [2024-11-27 00:04:21.029460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.031892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.031913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:33.064 [2024-11-27 00:04:21.031920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:30:33.064 [2024-11-27 00:04:21.031929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.036517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.036541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:33.064 [2024-11-27 00:04:21.036550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.576 ms 00:30:33.064 [2024-11-27 00:04:21.036560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.038966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
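The restore.sh steps traced around this point first capture the current bdev subsystem configuration and then unload the FTL bdev (the 'FTL shutdown' trace that follows). The two echo commands wrap the rpc.py save_subsystem_config output in a top-level "subsystems" array; the redirection target is not visible in this excerpt, but the spdk_dd invocation later in the run reads test/ftl/config/ftl.json, so a plausible sketch of the capture step is:

    # sketch only: the actual redirection in restore.sh is not shown in this log
    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    # detach the FTL bdev; the saved config is later passed to spdk_dd via --json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0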
00:30:33.064 [2024-11-27 00:04:21.039068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:33.064 [2024-11-27 00:04:21.039079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.356 ms 00:30:33.064 [2024-11-27 00:04:21.039086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.043634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.043666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:33.064 [2024-11-27 00:04:21.043674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.522 ms 00:30:33.064 [2024-11-27 00:04:21.043682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.043774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.043786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:33.064 [2024-11-27 00:04:21.043803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:33.064 [2024-11-27 00:04:21.043811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.046161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.046262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:33.064 [2024-11-27 00:04:21.046274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:30:33.064 [2024-11-27 00:04:21.046281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.048085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.048117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:33.064 [2024-11-27 00:04:21.048124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:30:33.064 [2024-11-27 00:04:21.048130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.050055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.050086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:33.064 [2024-11-27 00:04:21.050092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.894 ms 00:30:33.064 [2024-11-27 00:04:21.050099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.051714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.064 [2024-11-27 00:04:21.051746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:33.064 [2024-11-27 00:04:21.051752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:30:33.064 [2024-11-27 00:04:21.051759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.064 [2024-11-27 00:04:21.051783] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:33.064 [2024-11-27 00:04:21.051808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051825] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:33.064 [2024-11-27 00:04:21.051992] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 
00:04:21.052167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:30:33.065 [2024-11-27 00:04:21.052327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:33.065 [2024-11-27 00:04:21.052483] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:33.065 [2024-11-27 00:04:21.052490] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42e104c0-c23f-4953-a6b2-7de7e0f43cd2 00:30:33.065 
[2024-11-27 00:04:21.052498] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:33.065 [2024-11-27 00:04:21.052504] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:33.065 [2024-11-27 00:04:21.052511] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:33.065 [2024-11-27 00:04:21.052516] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:33.065 [2024-11-27 00:04:21.052530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:33.065 [2024-11-27 00:04:21.052538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:33.065 [2024-11-27 00:04:21.052545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:33.065 [2024-11-27 00:04:21.052549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:33.065 [2024-11-27 00:04:21.052555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:33.065 [2024-11-27 00:04:21.052560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.065 [2024-11-27 00:04:21.052568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:33.065 [2024-11-27 00:04:21.052574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:30:33.065 [2024-11-27 00:04:21.052582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.065 [2024-11-27 00:04:21.053768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.065 [2024-11-27 00:04:21.053874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:33.065 [2024-11-27 00:04:21.053887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:30:33.065 [2024-11-27 00:04:21.053895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.065 [2024-11-27 00:04:21.053961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.065 [2024-11-27 00:04:21.053970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:33.066 [2024-11-27 00:04:21.053976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:33.066 [2024-11-27 00:04:21.053983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.058535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.058566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:33.066 [2024-11-27 00:04:21.058576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.058584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.058625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.058632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:33.066 [2024-11-27 00:04:21.058638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.058646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.058691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.058701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:33.066 [2024-11-27 00:04:21.058708] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.058717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.058731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.058738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:33.066 [2024-11-27 00:04:21.058746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.058754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.066943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.066979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:33.066 [2024-11-27 00:04:21.066989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.066996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.073693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.073732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:33.066 [2024-11-27 00:04:21.073740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.073748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.073984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.074003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:33.066 [2024-11-27 00:04:21.074011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.074017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.074050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.074063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:33.066 [2024-11-27 00:04:21.074069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.074077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.074144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.074154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:33.066 [2024-11-27 00:04:21.074161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.074168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.074193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.074203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:33.066 [2024-11-27 00:04:21.074209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.074217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.074248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.074258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:30:33.066 [2024-11-27 00:04:21.074264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.074271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.074308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.066 [2024-11-27 00:04:21.074317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:33.066 [2024-11-27 00:04:21.074323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.066 [2024-11-27 00:04:21.074330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.066 [2024-11-27 00:04:21.074429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.758 ms, result 0 00:30:33.066 true 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94854 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94854 ']' 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94854 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94854 00:30:33.066 killing process with pid 94854 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94854' 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94854 00:30:33.066 00:04:21 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94854 00:30:38.363 00:04:25 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:41.733 262144+0 records in 00:30:41.733 262144+0 records out 00:30:41.733 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.26933 s, 252 MB/s 00:30:41.733 00:04:29 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:44.275 00:04:31 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:44.275 [2024-11-27 00:04:31.925439] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
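The dd step above writes 256K records of 4 KiB from /dev/urandom, i.e. exactly 1 GiB of test data, and the reported 252 MB/s is consistent with the 4.26933 s elapsed time; the md5sum taken next is presumably kept so the data written through ftl0 can be verified later in the run. A quick check of the arithmetic, using the figures copied from the dd output:

    echo $(( 262144 * 4096 ))   # 1073741824 bytes = 1.0 GiB, matching "262144+0 records out"
    awk 'BEGIN { printf "%.1f MB/s\n", 1073741824 / 4.26933 / 1e6 }'   # ~251.5, rounded by dd to 252 MB/s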
00:30:44.275 [2024-11-27 00:04:31.925551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95069 ] 00:30:44.275 [2024-11-27 00:04:32.068596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.275 [2024-11-27 00:04:32.088433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.275 [2024-11-27 00:04:32.178129] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:44.275 [2024-11-27 00:04:32.178197] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:44.275 [2024-11-27 00:04:32.334965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.335173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:44.275 [2024-11-27 00:04:32.335194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:44.275 [2024-11-27 00:04:32.335203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.335261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.335272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:44.275 [2024-11-27 00:04:32.335281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:44.275 [2024-11-27 00:04:32.335292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.335319] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:44.275 [2024-11-27 00:04:32.335545] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:44.275 [2024-11-27 00:04:32.335565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.335573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:44.275 [2024-11-27 00:04:32.335583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:30:44.275 [2024-11-27 00:04:32.335591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.336681] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:44.275 [2024-11-27 00:04:32.339422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.339463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:44.275 [2024-11-27 00:04:32.339473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.741 ms 00:30:44.275 [2024-11-27 00:04:32.339486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.339540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.339552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:44.275 [2024-11-27 00:04:32.339561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:44.275 [2024-11-27 00:04:32.339568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.344723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:44.275 [2024-11-27 00:04:32.344754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:44.275 [2024-11-27 00:04:32.344771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.101 ms 00:30:44.275 [2024-11-27 00:04:32.344778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.344875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.344885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:44.275 [2024-11-27 00:04:32.344894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:44.275 [2024-11-27 00:04:32.344903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.344939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.344949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:44.275 [2024-11-27 00:04:32.344958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:44.275 [2024-11-27 00:04:32.344968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.344989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:44.275 [2024-11-27 00:04:32.346378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.346541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:44.275 [2024-11-27 00:04:32.346556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:30:44.275 [2024-11-27 00:04:32.346564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.346593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.275 [2024-11-27 00:04:32.346607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:44.275 [2024-11-27 00:04:32.346615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:44.275 [2024-11-27 00:04:32.346625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.275 [2024-11-27 00:04:32.346644] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:44.275 [2024-11-27 00:04:32.346668] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:44.275 [2024-11-27 00:04:32.346705] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:44.275 [2024-11-27 00:04:32.346721] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:44.276 [2024-11-27 00:04:32.346840] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:44.276 [2024-11-27 00:04:32.346856] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:44.276 [2024-11-27 00:04:32.346869] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:44.276 [2024-11-27 00:04:32.346878] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:44.276 [2024-11-27 00:04:32.346891] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:44.276 [2024-11-27 00:04:32.346898] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:44.276 [2024-11-27 00:04:32.346909] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:44.276 [2024-11-27 00:04:32.346917] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:44.276 [2024-11-27 00:04:32.346923] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:44.276 [2024-11-27 00:04:32.346932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.276 [2024-11-27 00:04:32.346940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:44.276 [2024-11-27 00:04:32.346950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:30:44.276 [2024-11-27 00:04:32.346957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.276 [2024-11-27 00:04:32.347042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.276 [2024-11-27 00:04:32.347051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:44.276 [2024-11-27 00:04:32.347058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:30:44.276 [2024-11-27 00:04:32.347065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.276 [2024-11-27 00:04:32.347167] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:44.276 [2024-11-27 00:04:32.347179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:44.276 [2024-11-27 00:04:32.347189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:44.276 [2024-11-27 00:04:32.347214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:44.276 [2024-11-27 00:04:32.347241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:44.276 [2024-11-27 00:04:32.347258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:44.276 [2024-11-27 00:04:32.347267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:44.276 [2024-11-27 00:04:32.347275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:44.276 [2024-11-27 00:04:32.347283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:44.276 [2024-11-27 00:04:32.347291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:44.276 [2024-11-27 00:04:32.347299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:44.276 [2024-11-27 00:04:32.347315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347324] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:44.276 [2024-11-27 00:04:32.347339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:44.276 [2024-11-27 00:04:32.347363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:44.276 [2024-11-27 00:04:32.347385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:44.276 [2024-11-27 00:04:32.347411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:44.276 [2024-11-27 00:04:32.347435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:44.276 [2024-11-27 00:04:32.347450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:44.276 [2024-11-27 00:04:32.347456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:44.276 [2024-11-27 00:04:32.347463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:44.276 [2024-11-27 00:04:32.347469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:44.276 [2024-11-27 00:04:32.347476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:44.276 [2024-11-27 00:04:32.347482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:44.276 [2024-11-27 00:04:32.347494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:44.276 [2024-11-27 00:04:32.347501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347510] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:44.276 [2024-11-27 00:04:32.347519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:44.276 [2024-11-27 00:04:32.347526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.276 [2024-11-27 00:04:32.347545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:44.276 [2024-11-27 00:04:32.347551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:44.276 [2024-11-27 00:04:32.347557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:44.276 
[2024-11-27 00:04:32.347566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:44.276 [2024-11-27 00:04:32.347572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:44.276 [2024-11-27 00:04:32.347578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:44.276 [2024-11-27 00:04:32.347587] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:44.276 [2024-11-27 00:04:32.347596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.276 [2024-11-27 00:04:32.347604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:44.276 [2024-11-27 00:04:32.347612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:44.276 [2024-11-27 00:04:32.347619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:44.276 [2024-11-27 00:04:32.347627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:44.276 [2024-11-27 00:04:32.347635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:44.276 [2024-11-27 00:04:32.347642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:44.276 [2024-11-27 00:04:32.347649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:44.276 [2024-11-27 00:04:32.347657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:44.276 [2024-11-27 00:04:32.347665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:44.276 [2024-11-27 00:04:32.347676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:44.276 [2024-11-27 00:04:32.347683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:44.276 [2024-11-27 00:04:32.347690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:44.276 [2024-11-27 00:04:32.347697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:44.276 [2024-11-27 00:04:32.347704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:44.276 [2024-11-27 00:04:32.347711] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:44.276 [2024-11-27 00:04:32.347719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.276 [2024-11-27 00:04:32.347730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:44.276 [2024-11-27 00:04:32.347738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:44.276 [2024-11-27 00:04:32.347745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:44.276 [2024-11-27 00:04:32.347752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:44.276 [2024-11-27 00:04:32.347761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.276 [2024-11-27 00:04:32.347769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:44.276 [2024-11-27 00:04:32.347776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:30:44.276 [2024-11-27 00:04:32.347785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.276 [2024-11-27 00:04:32.357356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.276 [2024-11-27 00:04:32.357494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:44.276 [2024-11-27 00:04:32.357510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.518 ms 00:30:44.277 [2024-11-27 00:04:32.357518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-27 00:04:32.357598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-27 00:04:32.357606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:44.277 [2024-11-27 00:04:32.357614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:44.277 [2024-11-27 00:04:32.357621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-27 00:04:32.383673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-27 00:04:32.383714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:44.277 [2024-11-27 00:04:32.383727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.002 ms 00:30:44.277 [2024-11-27 00:04:32.383735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-27 00:04:32.383777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-27 00:04:32.383805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:44.277 [2024-11-27 00:04:32.383814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:44.277 [2024-11-27 00:04:32.383822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-27 00:04:32.384208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-27 00:04:32.384225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:44.277 [2024-11-27 00:04:32.384235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:30:44.277 [2024-11-27 00:04:32.384243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-27 00:04:32.384377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-27 00:04:32.384396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:44.277 [2024-11-27 00:04:32.384405] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:30:44.277 [2024-11-27 00:04:32.384414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-27 00:04:32.390061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-27 00:04:32.390216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:44.277 [2024-11-27 00:04:32.390231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.628 ms 00:30:44.277 [2024-11-27 00:04:32.390240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-27 00:04:32.393013] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:44.277 [2024-11-27 00:04:32.393052] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:44.277 [2024-11-27 00:04:32.393064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-27 00:04:32.393072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:44.277 [2024-11-27 00:04:32.393080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:30:44.277 [2024-11-27 00:04:32.393087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.408080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.408228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:44.538 [2024-11-27 00:04:32.408246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.954 ms 00:30:44.538 [2024-11-27 00:04:32.408254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.410731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.410763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:44.538 [2024-11-27 00:04:32.410773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:30:44.538 [2024-11-27 00:04:32.410780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.412695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.412728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:44.538 [2024-11-27 00:04:32.412737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:30:44.538 [2024-11-27 00:04:32.412744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.413081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.413095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:44.538 [2024-11-27 00:04:32.413104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:30:44.538 [2024-11-27 00:04:32.413111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.432606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.432651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:44.538 [2024-11-27 00:04:32.432662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.479 ms 00:30:44.538 [2024-11-27 00:04:32.432669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.440266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:44.538 [2024-11-27 00:04:32.442764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.442926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:44.538 [2024-11-27 00:04:32.442947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.055 ms 00:30:44.538 [2024-11-27 00:04:32.442956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.443042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.443054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:44.538 [2024-11-27 00:04:32.443064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:44.538 [2024-11-27 00:04:32.443078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.443139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.443148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:44.538 [2024-11-27 00:04:32.443157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:44.538 [2024-11-27 00:04:32.443167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.443185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.443194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:44.538 [2024-11-27 00:04:32.443203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:44.538 [2024-11-27 00:04:32.443210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.443240] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:44.538 [2024-11-27 00:04:32.443252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.443259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:44.538 [2024-11-27 00:04:32.443267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:44.538 [2024-11-27 00:04:32.443276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.447365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.447400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:44.538 [2024-11-27 00:04:32.447409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:30:44.538 [2024-11-27 00:04:32.447418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 [2024-11-27 00:04:32.447493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.538 [2024-11-27 00:04:32.447503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:44.538 [2024-11-27 00:04:32.447512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:44.538 [2024-11-27 00:04:32.447522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.538 
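Annotation (not part of the captured test output): once the 'FTL startup' management process below reports result 0, spdk_dd starts streaming data into the ftl0 bdev, and the bracketed "Copying: N/1024 [MB] (x MBps)" lines are its progress meter. A rough cross-check of the reported average rate, using only figures visible in this log (illustrative C, not SPDK code; treating the first shutdown trace timestamp as the end of the copy is an assumption):

#include <stdio.h>

/* Cross-check of the spdk_dd progress meter against the log's wall-clock stamps.
 * All numbers are read off the surrounding log; nothing here calls SPDK. */
int main(void)
{
    double total_mib = 1024.0;           /* final "Copying: 1024/1024 [MB]" */
    double start_s   = 4 * 60 + 32.448;  /* "FTL startup ... result 0" logged at 00:04:32.448 */
    double end_s     = 5 * 60 + 52.432;  /* first shutdown trace at 00:05:52.432 (assumed copy end) */
    double elapsed_s = end_s - start_s;  /* roughly 80 s of copying */

    printf("elapsed %.1f s, average %.1f MiB/s\n", elapsed_s, total_mib / elapsed_s);
    /* prints about 12.8 MiB/s, consistent with the logged "(average 12 MBps)" */
    return 0;
}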
[2024-11-27 00:04:32.448472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.111 ms, result 0 00:30:45.480  [2024-11-27T00:04:34.554Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-27T00:04:35.495Z] Copying: 41/1024 [MB] (20 MBps) [2024-11-27T00:04:36.878Z] Copying: 64/1024 [MB] (23 MBps) [2024-11-27T00:04:37.824Z] Copying: 89/1024 [MB] (24 MBps) [2024-11-27T00:04:38.768Z] Copying: 111/1024 [MB] (21 MBps) [2024-11-27T00:04:39.713Z] Copying: 135/1024 [MB] (23 MBps) [2024-11-27T00:04:40.658Z] Copying: 152/1024 [MB] (17 MBps) [2024-11-27T00:04:41.604Z] Copying: 172/1024 [MB] (19 MBps) [2024-11-27T00:04:42.549Z] Copying: 193/1024 [MB] (20 MBps) [2024-11-27T00:04:43.489Z] Copying: 213/1024 [MB] (20 MBps) [2024-11-27T00:04:44.873Z] Copying: 234/1024 [MB] (21 MBps) [2024-11-27T00:04:45.816Z] Copying: 252/1024 [MB] (17 MBps) [2024-11-27T00:04:46.761Z] Copying: 266/1024 [MB] (14 MBps) [2024-11-27T00:04:47.705Z] Copying: 277/1024 [MB] (11 MBps) [2024-11-27T00:04:48.650Z] Copying: 296/1024 [MB] (18 MBps) [2024-11-27T00:04:49.591Z] Copying: 313/1024 [MB] (17 MBps) [2024-11-27T00:04:50.538Z] Copying: 324/1024 [MB] (11 MBps) [2024-11-27T00:04:51.485Z] Copying: 335/1024 [MB] (10 MBps) [2024-11-27T00:04:52.875Z] Copying: 346/1024 [MB] (10 MBps) [2024-11-27T00:04:53.819Z] Copying: 364380/1048576 [kB] (10076 kBps) [2024-11-27T00:04:54.764Z] Copying: 367/1024 [MB] (11 MBps) [2024-11-27T00:04:55.710Z] Copying: 378/1024 [MB] (11 MBps) [2024-11-27T00:04:56.655Z] Copying: 388/1024 [MB] (10 MBps) [2024-11-27T00:04:57.601Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-27T00:04:58.547Z] Copying: 411/1024 [MB] (11 MBps) [2024-11-27T00:04:59.547Z] Copying: 421/1024 [MB] (10 MBps) [2024-11-27T00:05:00.493Z] Copying: 441936/1048576 [kB] (10112 kBps) [2024-11-27T00:05:01.880Z] Copying: 452168/1048576 [kB] (10232 kBps) [2024-11-27T00:05:02.825Z] Copying: 453/1024 [MB] (11 MBps) [2024-11-27T00:05:03.766Z] Copying: 464/1024 [MB] (11 MBps) [2024-11-27T00:05:04.710Z] Copying: 476/1024 [MB] (11 MBps) [2024-11-27T00:05:05.654Z] Copying: 487/1024 [MB] (11 MBps) [2024-11-27T00:05:06.598Z] Copying: 497/1024 [MB] (10 MBps) [2024-11-27T00:05:07.541Z] Copying: 509/1024 [MB] (11 MBps) [2024-11-27T00:05:08.483Z] Copying: 521/1024 [MB] (11 MBps) [2024-11-27T00:05:09.874Z] Copying: 532/1024 [MB] (11 MBps) [2024-11-27T00:05:10.820Z] Copying: 543/1024 [MB] (11 MBps) [2024-11-27T00:05:11.766Z] Copying: 555/1024 [MB] (11 MBps) [2024-11-27T00:05:12.710Z] Copying: 566/1024 [MB] (11 MBps) [2024-11-27T00:05:13.655Z] Copying: 577/1024 [MB] (11 MBps) [2024-11-27T00:05:14.601Z] Copying: 589/1024 [MB] (11 MBps) [2024-11-27T00:05:15.547Z] Copying: 600/1024 [MB] (11 MBps) [2024-11-27T00:05:16.492Z] Copying: 611/1024 [MB] (10 MBps) [2024-11-27T00:05:17.881Z] Copying: 622/1024 [MB] (11 MBps) [2024-11-27T00:05:18.825Z] Copying: 633/1024 [MB] (11 MBps) [2024-11-27T00:05:19.766Z] Copying: 645/1024 [MB] (11 MBps) [2024-11-27T00:05:20.710Z] Copying: 656/1024 [MB] (11 MBps) [2024-11-27T00:05:21.654Z] Copying: 667/1024 [MB] (11 MBps) [2024-11-27T00:05:22.600Z] Copying: 679/1024 [MB] (11 MBps) [2024-11-27T00:05:23.544Z] Copying: 690/1024 [MB] (11 MBps) [2024-11-27T00:05:24.491Z] Copying: 701/1024 [MB] (11 MBps) [2024-11-27T00:05:25.880Z] Copying: 713/1024 [MB] (11 MBps) [2024-11-27T00:05:26.825Z] Copying: 724/1024 [MB] (11 MBps) [2024-11-27T00:05:27.768Z] Copying: 735/1024 [MB] (10 MBps) [2024-11-27T00:05:28.807Z] Copying: 746/1024 [MB] (11 MBps) [2024-11-27T00:05:29.751Z] Copying: 757/1024 [MB] 
(11 MBps) [2024-11-27T00:05:30.698Z] Copying: 769/1024 [MB] (11 MBps) [2024-11-27T00:05:31.643Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-27T00:05:32.588Z] Copying: 791/1024 [MB] (11 MBps) [2024-11-27T00:05:33.534Z] Copying: 803/1024 [MB] (11 MBps) [2024-11-27T00:05:34.480Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-27T00:05:35.870Z] Copying: 826/1024 [MB] (11 MBps) [2024-11-27T00:05:36.816Z] Copying: 837/1024 [MB] (11 MBps) [2024-11-27T00:05:37.763Z] Copying: 848/1024 [MB] (10 MBps) [2024-11-27T00:05:38.708Z] Copying: 860/1024 [MB] (11 MBps) [2024-11-27T00:05:39.654Z] Copying: 871/1024 [MB] (11 MBps) [2024-11-27T00:05:40.598Z] Copying: 883/1024 [MB] (11 MBps) [2024-11-27T00:05:41.542Z] Copying: 894/1024 [MB] (11 MBps) [2024-11-27T00:05:42.486Z] Copying: 906/1024 [MB] (11 MBps) [2024-11-27T00:05:43.875Z] Copying: 917/1024 [MB] (11 MBps) [2024-11-27T00:05:44.821Z] Copying: 929/1024 [MB] (11 MBps) [2024-11-27T00:05:45.765Z] Copying: 939/1024 [MB] (10 MBps) [2024-11-27T00:05:46.710Z] Copying: 972288/1048576 [kB] (10192 kBps) [2024-11-27T00:05:47.654Z] Copying: 982432/1048576 [kB] (10144 kBps) [2024-11-27T00:05:48.598Z] Copying: 992360/1048576 [kB] (9928 kBps) [2024-11-27T00:05:49.541Z] Copying: 979/1024 [MB] (10 MBps) [2024-11-27T00:05:50.484Z] Copying: 990/1024 [MB] (11 MBps) [2024-11-27T00:05:51.873Z] Copying: 1001/1024 [MB] (11 MBps) [2024-11-27T00:05:52.448Z] Copying: 1013/1024 [MB] (11 MBps) [2024-11-27T00:05:52.448Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-27 00:05:52.432850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.317 [2024-11-27 00:05:52.432894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:04.317 [2024-11-27 00:05:52.432905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:04.317 [2024-11-27 00:05:52.432915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.317 [2024-11-27 00:05:52.432930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:04.317 [2024-11-27 00:05:52.433307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.317 [2024-11-27 00:05:52.433329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:04.317 [2024-11-27 00:05:52.433336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:32:04.317 [2024-11-27 00:05:52.433344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.317 [2024-11-27 00:05:52.435470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.317 [2024-11-27 00:05:52.435498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:04.317 [2024-11-27 00:05:52.435505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:32:04.317 [2024-11-27 00:05:52.435512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.317 [2024-11-27 00:05:52.435540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.317 [2024-11-27 00:05:52.435551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:04.317 [2024-11-27 00:05:52.435557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:04.317 [2024-11-27 00:05:52.435563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.317 [2024-11-27 00:05:52.435598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.317 
[2024-11-27 00:05:52.435604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:04.317 [2024-11-27 00:05:52.435610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:04.317 [2024-11-27 00:05:52.435615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.317 [2024-11-27 00:05:52.435625] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:04.317 [2024-11-27 00:05:52.435636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:04.317 [2024-11-27 00:05:52.435740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
22: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435909] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.435999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436054] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 
00:05:52.436194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:04.318 [2024-11-27 00:05:52.436222] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:04.318 [2024-11-27 00:05:52.436232] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42e104c0-c23f-4953-a6b2-7de7e0f43cd2 00:32:04.318 [2024-11-27 00:05:52.436238] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:04.318 [2024-11-27 00:05:52.436246] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:04.318 [2024-11-27 00:05:52.436252] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:04.318 [2024-11-27 00:05:52.436259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:04.318 [2024-11-27 00:05:52.436264] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:04.318 [2024-11-27 00:05:52.436270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:04.319 [2024-11-27 00:05:52.436275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:04.319 [2024-11-27 00:05:52.436280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:04.319 [2024-11-27 00:05:52.436285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:04.319 [2024-11-27 00:05:52.436290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.319 [2024-11-27 00:05:52.436296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:04.319 [2024-11-27 00:05:52.436303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:32:04.319 [2024-11-27 00:05:52.436308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.319 [2024-11-27 00:05:52.437553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.319 [2024-11-27 00:05:52.437568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:04.319 [2024-11-27 00:05:52.437575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:32:04.319 [2024-11-27 00:05:52.437585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.319 [2024-11-27 00:05:52.437651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.319 [2024-11-27 00:05:52.437661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:04.319 [2024-11-27 00:05:52.437667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:04.319 [2024-11-27 00:05:52.437673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.319 [2024-11-27 00:05:52.442071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.319 [2024-11-27 00:05:52.442195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.319 [2024-11-27 00:05:52.442237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.319 [2024-11-27 00:05:52.442254] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.319 [2024-11-27 00:05:52.442309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.319 [2024-11-27 00:05:52.442330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.319 [2024-11-27 00:05:52.442345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.319 [2024-11-27 00:05:52.442359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.319 [2024-11-27 00:05:52.442407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.319 [2024-11-27 00:05:52.442425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.319 [2024-11-27 00:05:52.442442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.319 [2024-11-27 00:05:52.442484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.319 [2024-11-27 00:05:52.442563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.319 [2024-11-27 00:05:52.442582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.319 [2024-11-27 00:05:52.442620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.319 [2024-11-27 00:05:52.442637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.450485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.581 [2024-11-27 00:05:52.450610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.581 [2024-11-27 00:05:52.450647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.581 [2024-11-27 00:05:52.450665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.456787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.581 [2024-11-27 00:05:52.456911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.581 [2024-11-27 00:05:52.456958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.581 [2024-11-27 00:05:52.456976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.457004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.581 [2024-11-27 00:05:52.457020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.581 [2024-11-27 00:05:52.457035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.581 [2024-11-27 00:05:52.457050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.457093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.581 [2024-11-27 00:05:52.457110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.581 [2024-11-27 00:05:52.457126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.581 [2024-11-27 00:05:52.457170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.457226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.581 [2024-11-27 00:05:52.457244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.581 [2024-11-27 00:05:52.457260] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.581 [2024-11-27 00:05:52.457274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.457366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.581 [2024-11-27 00:05:52.457387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:04.581 [2024-11-27 00:05:52.457403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.581 [2024-11-27 00:05:52.457423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.457462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.581 [2024-11-27 00:05:52.457478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.581 [2024-11-27 00:05:52.457519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.581 [2024-11-27 00:05:52.457535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.457577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.581 [2024-11-27 00:05:52.457596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.581 [2024-11-27 00:05:52.457611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.581 [2024-11-27 00:05:52.457628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.581 [2024-11-27 00:05:52.457764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 24.890 ms, result 0 00:32:04.581 00:32:04.581 00:32:04.581 00:05:52 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:04.842 [2024-11-27 00:05:52.737976] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:32:04.842 [2024-11-27 00:05:52.738250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95883 ] 00:32:04.842 [2024-11-27 00:05:52.880185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.842 [2024-11-27 00:05:52.896587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.106 [2024-11-27 00:05:52.979669] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:05.106 [2024-11-27 00:05:52.979877] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:05.106 [2024-11-27 00:05:53.128870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.128996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:05.106 [2024-11-27 00:05:53.129044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:05.106 [2024-11-27 00:05:53.129062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.129122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.129142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:05.106 [2024-11-27 00:05:53.129159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:05.106 [2024-11-27 00:05:53.129180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.129218] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:05.106 [2024-11-27 00:05:53.129422] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:05.106 [2024-11-27 00:05:53.129463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.129478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:05.106 [2024-11-27 00:05:53.129497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:32:05.106 [2024-11-27 00:05:53.129511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.129808] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:05.106 [2024-11-27 00:05:53.130089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.130107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:05.106 [2024-11-27 00:05:53.130115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:32:05.106 [2024-11-27 00:05:53.130126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.130189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.130198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:05.106 [2024-11-27 00:05:53.130205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:05.106 [2024-11-27 00:05:53.130215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.130407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:05.106 [2024-11-27 00:05:53.130416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:05.106 [2024-11-27 00:05:53.130426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:32:05.106 [2024-11-27 00:05:53.130431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.130492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.130500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:05.106 [2024-11-27 00:05:53.130507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:32:05.106 [2024-11-27 00:05:53.130512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.130531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.130538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:05.106 [2024-11-27 00:05:53.130544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:05.106 [2024-11-27 00:05:53.130550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.130564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:05.106 [2024-11-27 00:05:53.131919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.132000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:05.106 [2024-11-27 00:05:53.132040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:32:05.106 [2024-11-27 00:05:53.132057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.132098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.106 [2024-11-27 00:05:53.132115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:05.106 [2024-11-27 00:05:53.132131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:05.106 [2024-11-27 00:05:53.132145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.106 [2024-11-27 00:05:53.132202] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:05.106 [2024-11-27 00:05:53.132231] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:05.106 [2024-11-27 00:05:53.132280] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:05.106 [2024-11-27 00:05:53.132310] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:05.106 [2024-11-27 00:05:53.132433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:05.106 [2024-11-27 00:05:53.132459] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:05.106 [2024-11-27 00:05:53.132484] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:05.106 [2024-11-27 00:05:53.132540] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:05.106 [2024-11-27 00:05:53.132571] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:05.106 [2024-11-27 00:05:53.132594] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:05.106 [2024-11-27 00:05:53.132609] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:05.106 [2024-11-27 00:05:53.132646] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:05.106 [2024-11-27 00:05:53.132663] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:05.106 [2024-11-27 00:05:53.132679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.107 [2024-11-27 00:05:53.132718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:05.107 [2024-11-27 00:05:53.132727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:32:05.107 [2024-11-27 00:05:53.132737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.107 [2024-11-27 00:05:53.132820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.107 [2024-11-27 00:05:53.132829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:05.107 [2024-11-27 00:05:53.132837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:05.107 [2024-11-27 00:05:53.132843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.107 [2024-11-27 00:05:53.132933] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:05.107 [2024-11-27 00:05:53.132944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:05.107 [2024-11-27 00:05:53.132950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:05.107 [2024-11-27 00:05:53.132960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.107 [2024-11-27 00:05:53.132966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:05.107 [2024-11-27 00:05:53.132971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:05.107 [2024-11-27 00:05:53.132977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:05.107 [2024-11-27 00:05:53.132982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:05.107 [2024-11-27 00:05:53.132987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:05.107 [2024-11-27 00:05:53.132992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:05.107 [2024-11-27 00:05:53.132997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:05.107 [2024-11-27 00:05:53.133002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:05.107 [2024-11-27 00:05:53.133008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:05.107 [2024-11-27 00:05:53.133014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:05.107 [2024-11-27 00:05:53.133020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:05.107 [2024-11-27 00:05:53.133025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:05.107 [2024-11-27 00:05:53.133035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:05.107 [2024-11-27 00:05:53.133041] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:05.107 [2024-11-27 00:05:53.133053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.107 [2024-11-27 00:05:53.133063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:05.107 [2024-11-27 00:05:53.133068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.107 [2024-11-27 00:05:53.133078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:05.107 [2024-11-27 00:05:53.133083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.107 [2024-11-27 00:05:53.133093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:05.107 [2024-11-27 00:05:53.133098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.107 [2024-11-27 00:05:53.133108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:05.107 [2024-11-27 00:05:53.133113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:05.107 [2024-11-27 00:05:53.133123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:05.107 [2024-11-27 00:05:53.133131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:05.107 [2024-11-27 00:05:53.133136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:05.107 [2024-11-27 00:05:53.133141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:05.107 [2024-11-27 00:05:53.133146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:05.107 [2024-11-27 00:05:53.133151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:05.107 [2024-11-27 00:05:53.133161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:05.107 [2024-11-27 00:05:53.133166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133171] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:05.107 [2024-11-27 00:05:53.133176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:05.107 [2024-11-27 00:05:53.133184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:05.107 [2024-11-27 00:05:53.133191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.107 [2024-11-27 00:05:53.133196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:05.107 [2024-11-27 00:05:53.133201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:05.107 [2024-11-27 00:05:53.133206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:05.107 
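Annotation (not part of the captured test output): the dump_region lines in this layout dump print each region as an offset and size in MiB, while the "SB metadata layout" entries that follow give the same regions as hex blk_offs/blk_sz block counts. The two sets of figures agree if each FTL block is 4 KiB; a small illustrative conversion (plain C, not SPDK code; the 4096-byte block size is an assumption inferred from the matching values):

#include <stdio.h>

#define ASSUMED_BLOCK_SIZE 4096ULL          /* assumption: 4 KiB FTL blocks */
#define MIB                (1024ULL * 1024ULL)

/* Convert a blk_offs/blk_sz pair from the superblock layout dump into the MiB
 * figures printed by dump_region. The pairs below are copied from this log. */
static void show(const char *label, unsigned long long blk_offs, unsigned long long blk_sz)
{
    printf("%-10s offset: %8.2f MiB  blocks: %10.2f MiB\n", label,
           (double)(blk_offs * ASSUMED_BLOCK_SIZE) / MIB,
           (double)(blk_sz  * ASSUMED_BLOCK_SIZE) / MIB);
}

int main(void)
{
    show("l2p",      0x20,   0x5000);      /* roughly 0.12 / 80.00 MiB, matching Region l2p */
    show("band_md",  0x5020, 0x80);        /* roughly 80.12 / 0.50 MiB, matching Region band_md */
    show("data_btm", 0x40,   0x1900000);   /* 0.25 / 102400.00 MiB, matching Region data_btm */
    return 0;
}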
[2024-11-27 00:05:53.133212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:05.107 [2024-11-27 00:05:53.133219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:05.107 [2024-11-27 00:05:53.133225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:05.107 [2024-11-27 00:05:53.133231] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:05.107 [2024-11-27 00:05:53.133238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:05.107 [2024-11-27 00:05:53.133245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:05.107 [2024-11-27 00:05:53.133250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:05.107 [2024-11-27 00:05:53.133256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:05.107 [2024-11-27 00:05:53.133261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:05.107 [2024-11-27 00:05:53.133267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:05.107 [2024-11-27 00:05:53.133272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:05.107 [2024-11-27 00:05:53.133278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:05.107 [2024-11-27 00:05:53.133284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:05.107 [2024-11-27 00:05:53.133289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:05.107 [2024-11-27 00:05:53.133295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:05.107 [2024-11-27 00:05:53.133301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:05.107 [2024-11-27 00:05:53.133310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:05.107 [2024-11-27 00:05:53.133317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:05.107 [2024-11-27 00:05:53.133322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:05.107 [2024-11-27 00:05:53.133327] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:05.107 [2024-11-27 00:05:53.133336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:05.107 [2024-11-27 00:05:53.133342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:05.107 [2024-11-27 00:05:53.133348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:05.107 [2024-11-27 00:05:53.133354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:05.107 [2024-11-27 00:05:53.133361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:05.107 [2024-11-27 00:05:53.133368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.107 [2024-11-27 00:05:53.133374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:05.107 [2024-11-27 00:05:53.133380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:32:05.107 [2024-11-27 00:05:53.133387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.107 [2024-11-27 00:05:53.138909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.107 [2024-11-27 00:05:53.138929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:05.107 [2024-11-27 00:05:53.138937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.491 ms 00:32:05.107 [2024-11-27 00:05:53.138947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.107 [2024-11-27 00:05:53.139008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.107 [2024-11-27 00:05:53.139015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:05.107 [2024-11-27 00:05:53.139024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:32:05.107 [2024-11-27 00:05:53.139030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.107 [2024-11-27 00:05:53.162341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.107 [2024-11-27 00:05:53.162424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:05.107 [2024-11-27 00:05:53.162455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.266 ms 00:32:05.107 [2024-11-27 00:05:53.162478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.107 [2024-11-27 00:05:53.162566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.107 [2024-11-27 00:05:53.162592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:05.108 [2024-11-27 00:05:53.162614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:05.108 [2024-11-27 00:05:53.162634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.162908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.162970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:05.108 [2024-11-27 00:05:53.162996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:32:05.108 [2024-11-27 00:05:53.163020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.163328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.163371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:05.108 [2024-11-27 00:05:53.163394] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:32:05.108 [2024-11-27 00:05:53.163417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.169530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.169555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:05.108 [2024-11-27 00:05:53.169566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.070 ms 00:32:05.108 [2024-11-27 00:05:53.169573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.169647] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:05.108 [2024-11-27 00:05:53.169657] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:05.108 [2024-11-27 00:05:53.169663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.169669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:05.108 [2024-11-27 00:05:53.169675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:05.108 [2024-11-27 00:05:53.169683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.178828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.178859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:05.108 [2024-11-27 00:05:53.178867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.133 ms 00:32:05.108 [2024-11-27 00:05:53.178872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.178959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.178969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:05.108 [2024-11-27 00:05:53.178977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:05.108 [2024-11-27 00:05:53.178985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.179018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.179027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:05.108 [2024-11-27 00:05:53.179033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:05.108 [2024-11-27 00:05:53.179039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.179257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.179265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:05.108 [2024-11-27 00:05:53.179272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:32:05.108 [2024-11-27 00:05:53.179277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.179296] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:05.108 [2024-11-27 00:05:53.179306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.179314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:05.108 [2024-11-27 00:05:53.179320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:05.108 [2024-11-27 00:05:53.179326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.185542] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:05.108 [2024-11-27 00:05:53.185643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.185650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:05.108 [2024-11-27 00:05:53.185657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.302 ms 00:32:05.108 [2024-11-27 00:05:53.185662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.187406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.187426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:05.108 [2024-11-27 00:05:53.187434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:32:05.108 [2024-11-27 00:05:53.187440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.187496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.187503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:05.108 [2024-11-27 00:05:53.187509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:05.108 [2024-11-27 00:05:53.187517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.187532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.187538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:05.108 [2024-11-27 00:05:53.187544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:05.108 [2024-11-27 00:05:53.187550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.187571] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:05.108 [2024-11-27 00:05:53.187579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.187584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:05.108 [2024-11-27 00:05:53.187592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:05.108 [2024-11-27 00:05:53.187597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.191599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.191730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:05.108 [2024-11-27 00:05:53.191743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.986 ms 00:32:05.108 [2024-11-27 00:05:53.191749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.191812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.108 [2024-11-27 00:05:53.191824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:05.108 [2024-11-27 00:05:53.191830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.038 ms 00:32:05.108 [2024-11-27 00:05:53.191837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.108 [2024-11-27 00:05:53.192508] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 63.337 ms, result 0 00:32:06.498  [2024-11-27T00:05:55.577Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-27T00:05:56.522Z] Copying: 24/1024 [MB] (13 MBps) [2024-11-27T00:05:57.485Z] Copying: 36/1024 [MB] (11 MBps) [2024-11-27T00:05:58.429Z] Copying: 47/1024 [MB] (11 MBps) [2024-11-27T00:05:59.374Z] Copying: 58/1024 [MB] (10 MBps) [2024-11-27T00:06:00.763Z] Copying: 69/1024 [MB] (10 MBps) [2024-11-27T00:06:01.337Z] Copying: 81/1024 [MB] (11 MBps) [2024-11-27T00:06:02.727Z] Copying: 93/1024 [MB] (12 MBps) [2024-11-27T00:06:03.672Z] Copying: 104/1024 [MB] (11 MBps) [2024-11-27T00:06:04.617Z] Copying: 115/1024 [MB] (11 MBps) [2024-11-27T00:06:05.561Z] Copying: 126/1024 [MB] (10 MBps) [2024-11-27T00:06:06.503Z] Copying: 137/1024 [MB] (10 MBps) [2024-11-27T00:06:07.448Z] Copying: 148/1024 [MB] (11 MBps) [2024-11-27T00:06:08.392Z] Copying: 160/1024 [MB] (11 MBps) [2024-11-27T00:06:09.336Z] Copying: 172/1024 [MB] (12 MBps) [2024-11-27T00:06:10.723Z] Copying: 185/1024 [MB] (12 MBps) [2024-11-27T00:06:11.668Z] Copying: 197/1024 [MB] (12 MBps) [2024-11-27T00:06:12.613Z] Copying: 209/1024 [MB] (11 MBps) [2024-11-27T00:06:13.557Z] Copying: 220/1024 [MB] (11 MBps) [2024-11-27T00:06:14.499Z] Copying: 231/1024 [MB] (11 MBps) [2024-11-27T00:06:15.442Z] Copying: 243/1024 [MB] (11 MBps) [2024-11-27T00:06:16.386Z] Copying: 255/1024 [MB] (11 MBps) [2024-11-27T00:06:17.328Z] Copying: 266/1024 [MB] (11 MBps) [2024-11-27T00:06:18.714Z] Copying: 278/1024 [MB] (11 MBps) [2024-11-27T00:06:19.659Z] Copying: 289/1024 [MB] (10 MBps) [2024-11-27T00:06:20.605Z] Copying: 300/1024 [MB] (11 MBps) [2024-11-27T00:06:21.550Z] Copying: 312/1024 [MB] (11 MBps) [2024-11-27T00:06:22.496Z] Copying: 323/1024 [MB] (11 MBps) [2024-11-27T00:06:23.442Z] Copying: 334/1024 [MB] (10 MBps) [2024-11-27T00:06:24.387Z] Copying: 345/1024 [MB] (10 MBps) [2024-11-27T00:06:25.332Z] Copying: 356/1024 [MB] (11 MBps) [2024-11-27T00:06:26.368Z] Copying: 368/1024 [MB] (11 MBps) [2024-11-27T00:06:27.756Z] Copying: 379/1024 [MB] (11 MBps) [2024-11-27T00:06:28.330Z] Copying: 393/1024 [MB] (14 MBps) [2024-11-27T00:06:29.728Z] Copying: 404/1024 [MB] (10 MBps) [2024-11-27T00:06:30.673Z] Copying: 415/1024 [MB] (10 MBps) [2024-11-27T00:06:31.617Z] Copying: 426/1024 [MB] (10 MBps) [2024-11-27T00:06:32.561Z] Copying: 436/1024 [MB] (10 MBps) [2024-11-27T00:06:33.505Z] Copying: 447/1024 [MB] (10 MBps) [2024-11-27T00:06:34.448Z] Copying: 458/1024 [MB] (11 MBps) [2024-11-27T00:06:35.392Z] Copying: 470/1024 [MB] (11 MBps) [2024-11-27T00:06:36.338Z] Copying: 481/1024 [MB] (10 MBps) [2024-11-27T00:06:37.745Z] Copying: 492/1024 [MB] (11 MBps) [2024-11-27T00:06:38.690Z] Copying: 504/1024 [MB] (11 MBps) [2024-11-27T00:06:39.635Z] Copying: 515/1024 [MB] (11 MBps) [2024-11-27T00:06:40.578Z] Copying: 527/1024 [MB] (11 MBps) [2024-11-27T00:06:41.525Z] Copying: 539/1024 [MB] (11 MBps) [2024-11-27T00:06:42.469Z] Copying: 550/1024 [MB] (11 MBps) [2024-11-27T00:06:43.415Z] Copying: 562/1024 [MB] (11 MBps) [2024-11-27T00:06:44.359Z] Copying: 572/1024 [MB] (10 MBps) [2024-11-27T00:06:45.747Z] Copying: 583/1024 [MB] (11 MBps) [2024-11-27T00:06:46.691Z] Copying: 595/1024 [MB] (11 MBps) [2024-11-27T00:06:47.636Z] Copying: 607/1024 [MB] (11 MBps) [2024-11-27T00:06:48.579Z] Copying: 619/1024 [MB] (11 
MBps) [2024-11-27T00:06:49.519Z] Copying: 630/1024 [MB] (11 MBps) [2024-11-27T00:06:50.462Z] Copying: 642/1024 [MB] (11 MBps) [2024-11-27T00:06:51.409Z] Copying: 654/1024 [MB] (11 MBps) [2024-11-27T00:06:52.354Z] Copying: 664/1024 [MB] (10 MBps) [2024-11-27T00:06:53.743Z] Copying: 676/1024 [MB] (11 MBps) [2024-11-27T00:06:54.690Z] Copying: 687/1024 [MB] (11 MBps) [2024-11-27T00:06:55.676Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-27T00:06:56.620Z] Copying: 711/1024 [MB] (12 MBps) [2024-11-27T00:06:57.564Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-27T00:06:58.509Z] Copying: 734/1024 [MB] (11 MBps) [2024-11-27T00:06:59.453Z] Copying: 748/1024 [MB] (14 MBps) [2024-11-27T00:07:00.397Z] Copying: 758/1024 [MB] (10 MBps) [2024-11-27T00:07:01.341Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-27T00:07:02.731Z] Copying: 781/1024 [MB] (11 MBps) [2024-11-27T00:07:03.676Z] Copying: 791/1024 [MB] (10 MBps) [2024-11-27T00:07:04.629Z] Copying: 803/1024 [MB] (11 MBps) [2024-11-27T00:07:05.574Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-27T00:07:06.518Z] Copying: 826/1024 [MB] (11 MBps) [2024-11-27T00:07:07.464Z] Copying: 838/1024 [MB] (12 MBps) [2024-11-27T00:07:08.410Z] Copying: 850/1024 [MB] (12 MBps) [2024-11-27T00:07:09.379Z] Copying: 861/1024 [MB] (10 MBps) [2024-11-27T00:07:10.335Z] Copying: 872/1024 [MB] (10 MBps) [2024-11-27T00:07:11.722Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-27T00:07:12.666Z] Copying: 895/1024 [MB] (11 MBps) [2024-11-27T00:07:13.612Z] Copying: 907/1024 [MB] (11 MBps) [2024-11-27T00:07:14.556Z] Copying: 919/1024 [MB] (12 MBps) [2024-11-27T00:07:15.501Z] Copying: 931/1024 [MB] (12 MBps) [2024-11-27T00:07:16.445Z] Copying: 943/1024 [MB] (11 MBps) [2024-11-27T00:07:17.388Z] Copying: 953/1024 [MB] (10 MBps) [2024-11-27T00:07:18.333Z] Copying: 965/1024 [MB] (11 MBps) [2024-11-27T00:07:19.722Z] Copying: 976/1024 [MB] (11 MBps) [2024-11-27T00:07:20.665Z] Copying: 988/1024 [MB] (11 MBps) [2024-11-27T00:07:21.607Z] Copying: 999/1024 [MB] (10 MBps) [2024-11-27T00:07:22.550Z] Copying: 1010/1024 [MB] (10 MBps) [2024-11-27T00:07:22.817Z] Copying: 1021/1024 [MB] (11 MBps) [2024-11-27T00:07:22.817Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-27 00:07:22.763750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.686 [2024-11-27 00:07:22.763845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:34.686 [2024-11-27 00:07:22.763870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:34.686 [2024-11-27 00:07:22.763883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.686 [2024-11-27 00:07:22.763919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:34.686 [2024-11-27 00:07:22.764453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.686 [2024-11-27 00:07:22.764476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:34.686 [2024-11-27 00:07:22.764490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:33:34.686 [2024-11-27 00:07:22.764502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.686 [2024-11-27 00:07:22.764847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.686 [2024-11-27 00:07:22.764864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:34.686 [2024-11-27 00:07:22.764878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 
ms 00:33:34.686 [2024-11-27 00:07:22.764891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.686 [2024-11-27 00:07:22.764942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.686 [2024-11-27 00:07:22.764963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:34.686 [2024-11-27 00:07:22.764977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:34.686 [2024-11-27 00:07:22.764990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.686 [2024-11-27 00:07:22.765066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.686 [2024-11-27 00:07:22.765079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:34.686 [2024-11-27 00:07:22.765091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:34.686 [2024-11-27 00:07:22.765103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.686 [2024-11-27 00:07:22.765123] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:34.686 [2024-11-27 00:07:22.765140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 
state: free 00:33:34.686 [2024-11-27 00:07:22.765357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 
0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:34.686 [2024-11-27 00:07:22.765825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.765989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766289] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:34.687 [2024-11-27 00:07:22.766408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:34.687 [2024-11-27 00:07:22.766439] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42e104c0-c23f-4953-a6b2-7de7e0f43cd2 00:33:34.687 [2024-11-27 00:07:22.766451] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:34.687 [2024-11-27 00:07:22.766462] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:34.687 [2024-11-27 00:07:22.766473] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:34.687 [2024-11-27 00:07:22.766490] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:34.687 [2024-11-27 00:07:22.766505] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:34.687 [2024-11-27 00:07:22.766518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:34.687 [2024-11-27 00:07:22.766529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:34.687 [2024-11-27 00:07:22.766539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:34.687 [2024-11-27 00:07:22.766549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:34.687 [2024-11-27 00:07:22.766560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.687 [2024-11-27 00:07:22.766571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:34.687 [2024-11-27 00:07:22.766583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:33:34.687 [2024-11-27 00:07:22.766600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.768439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.687 [2024-11-27 00:07:22.768472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:34.687 [2024-11-27 00:07:22.768487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 00:33:34.687 [2024-11-27 00:07:22.768500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.768600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.687 [2024-11-27 00:07:22.768614] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:34.687 [2024-11-27 00:07:22.768633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:33:34.687 [2024-11-27 00:07:22.768646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.775732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.687 [2024-11-27 00:07:22.776114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:34.687 [2024-11-27 00:07:22.776135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.687 [2024-11-27 00:07:22.776142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.776203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.687 [2024-11-27 00:07:22.776210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:34.687 [2024-11-27 00:07:22.776221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.687 [2024-11-27 00:07:22.776231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.776282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.687 [2024-11-27 00:07:22.776291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:34.687 [2024-11-27 00:07:22.776298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.687 [2024-11-27 00:07:22.776303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.776315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.687 [2024-11-27 00:07:22.776321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:34.687 [2024-11-27 00:07:22.776327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.687 [2024-11-27 00:07:22.776335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.784629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.687 [2024-11-27 00:07:22.784659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:34.687 [2024-11-27 00:07:22.784668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.687 [2024-11-27 00:07:22.784675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.791697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.687 [2024-11-27 00:07:22.791886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:34.687 [2024-11-27 00:07:22.791903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.687 [2024-11-27 00:07:22.791910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.791930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.687 [2024-11-27 00:07:22.791937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:34.687 [2024-11-27 00:07:22.791943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.687 [2024-11-27 00:07:22.791949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.687 [2024-11-27 00:07:22.791983] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:33:34.687 [2024-11-27 00:07:22.791991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:34.687 [2024-11-27 00:07:22.791996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.687 [2024-11-27 00:07:22.792002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.688 [2024-11-27 00:07:22.792045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.688 [2024-11-27 00:07:22.792055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:34.688 [2024-11-27 00:07:22.792062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.688 [2024-11-27 00:07:22.792068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.688 [2024-11-27 00:07:22.792092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.688 [2024-11-27 00:07:22.792099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:34.688 [2024-11-27 00:07:22.792106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.688 [2024-11-27 00:07:22.792111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.688 [2024-11-27 00:07:22.792143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.688 [2024-11-27 00:07:22.792150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:34.688 [2024-11-27 00:07:22.792156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.688 [2024-11-27 00:07:22.792162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.688 [2024-11-27 00:07:22.792196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.688 [2024-11-27 00:07:22.792203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:34.688 [2024-11-27 00:07:22.792210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.688 [2024-11-27 00:07:22.792215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.688 [2024-11-27 00:07:22.792311] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.551 ms, result 0 00:33:35.067 00:33:35.067 00:33:35.067 00:07:22 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:37.611 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:37.611 00:07:25 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:37.611 [2024-11-27 00:07:25.204497] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:33:37.611 [2024-11-27 00:07:25.204629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96826 ] 00:33:37.611 [2024-11-27 00:07:25.347678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.611 [2024-11-27 00:07:25.369172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.611 [2024-11-27 00:07:25.455289] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:37.611 [2024-11-27 00:07:25.455345] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:37.611 [2024-11-27 00:07:25.604004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.611 [2024-11-27 00:07:25.604038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:37.611 [2024-11-27 00:07:25.604048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:37.611 [2024-11-27 00:07:25.604057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.611 [2024-11-27 00:07:25.604090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.611 [2024-11-27 00:07:25.604097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:37.611 [2024-11-27 00:07:25.604103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:37.611 [2024-11-27 00:07:25.604112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.611 [2024-11-27 00:07:25.604128] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:37.611 [2024-11-27 00:07:25.604295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:37.611 [2024-11-27 00:07:25.604306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.611 [2024-11-27 00:07:25.604314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:37.611 [2024-11-27 00:07:25.604322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:33:37.611 [2024-11-27 00:07:25.604328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.611 [2024-11-27 00:07:25.604501] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:37.611 [2024-11-27 00:07:25.604518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.611 [2024-11-27 00:07:25.604524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:37.611 [2024-11-27 00:07:25.604530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:37.611 [2024-11-27 00:07:25.604540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.611 [2024-11-27 00:07:25.604596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.611 [2024-11-27 00:07:25.604604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:37.611 [2024-11-27 00:07:25.604611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:33:37.612 [2024-11-27 00:07:25.604617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.612 [2024-11-27 00:07:25.604808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:37.612 [2024-11-27 00:07:25.604818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:37.612 [2024-11-27 00:07:25.604825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:33:37.612 [2024-11-27 00:07:25.604831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.612 [2024-11-27 00:07:25.604888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.612 [2024-11-27 00:07:25.604900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:37.612 [2024-11-27 00:07:25.604907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:37.612 [2024-11-27 00:07:25.604912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.612 [2024-11-27 00:07:25.604928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.612 [2024-11-27 00:07:25.604935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:37.612 [2024-11-27 00:07:25.604941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:37.612 [2024-11-27 00:07:25.604947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.612 [2024-11-27 00:07:25.604960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:37.612 [2024-11-27 00:07:25.606172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.612 [2024-11-27 00:07:25.606327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:37.612 [2024-11-27 00:07:25.606340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:33:37.612 [2024-11-27 00:07:25.606351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.612 [2024-11-27 00:07:25.606381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.612 [2024-11-27 00:07:25.606388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:37.612 [2024-11-27 00:07:25.606395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:37.612 [2024-11-27 00:07:25.606400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.612 [2024-11-27 00:07:25.606414] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:37.612 [2024-11-27 00:07:25.606430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:37.612 [2024-11-27 00:07:25.606459] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:37.612 [2024-11-27 00:07:25.606477] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:37.612 [2024-11-27 00:07:25.606555] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:37.612 [2024-11-27 00:07:25.606563] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:37.612 [2024-11-27 00:07:25.606571] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:37.612 [2024-11-27 00:07:25.606579] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:37.612 [2024-11-27 00:07:25.606589] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:37.612 [2024-11-27 00:07:25.606596] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:37.612 [2024-11-27 00:07:25.606604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:37.612 [2024-11-27 00:07:25.606610] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:37.612 [2024-11-27 00:07:25.606615] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:37.612 [2024-11-27 00:07:25.606623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.612 [2024-11-27 00:07:25.606628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:37.612 [2024-11-27 00:07:25.606634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:33:37.612 [2024-11-27 00:07:25.606640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.612 [2024-11-27 00:07:25.606706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.612 [2024-11-27 00:07:25.606713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:37.612 [2024-11-27 00:07:25.606722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:37.612 [2024-11-27 00:07:25.606728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.612 [2024-11-27 00:07:25.606818] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:37.612 [2024-11-27 00:07:25.606827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:37.612 [2024-11-27 00:07:25.606833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:37.612 [2024-11-27 00:07:25.606841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.612 [2024-11-27 00:07:25.606847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:37.612 [2024-11-27 00:07:25.606852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:37.612 [2024-11-27 00:07:25.606858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:37.612 [2024-11-27 00:07:25.606863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:37.612 [2024-11-27 00:07:25.606869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:37.612 [2024-11-27 00:07:25.606876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:37.612 [2024-11-27 00:07:25.606882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:37.612 [2024-11-27 00:07:25.606888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:37.612 [2024-11-27 00:07:25.606893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:37.612 [2024-11-27 00:07:25.606899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:37.612 [2024-11-27 00:07:25.606905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:37.612 [2024-11-27 00:07:25.606910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.612 [2024-11-27 00:07:25.606917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:37.612 [2024-11-27 00:07:25.606923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:37.612 [2024-11-27 00:07:25.606929] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.612 [2024-11-27 00:07:25.606936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:37.612 [2024-11-27 00:07:25.606942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:37.612 [2024-11-27 00:07:25.606949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:37.612 [2024-11-27 00:07:25.606955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:37.612 [2024-11-27 00:07:25.606961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:37.612 [2024-11-27 00:07:25.606966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:37.612 [2024-11-27 00:07:25.606972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:37.612 [2024-11-27 00:07:25.606977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:37.612 [2024-11-27 00:07:25.606983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:37.612 [2024-11-27 00:07:25.606988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:37.612 [2024-11-27 00:07:25.606994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:37.612 [2024-11-27 00:07:25.607000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:37.612 [2024-11-27 00:07:25.607006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:37.612 [2024-11-27 00:07:25.607013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:37.612 [2024-11-27 00:07:25.607019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:37.612 [2024-11-27 00:07:25.607024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:37.612 [2024-11-27 00:07:25.607033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:37.612 [2024-11-27 00:07:25.607039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:37.612 [2024-11-27 00:07:25.607045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:37.612 [2024-11-27 00:07:25.607051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:37.612 [2024-11-27 00:07:25.607059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.612 [2024-11-27 00:07:25.607064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:37.612 [2024-11-27 00:07:25.607070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:37.612 [2024-11-27 00:07:25.607076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.612 [2024-11-27 00:07:25.607082] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:37.612 [2024-11-27 00:07:25.607089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:37.612 [2024-11-27 00:07:25.607095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:37.612 [2024-11-27 00:07:25.607103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:37.612 [2024-11-27 00:07:25.607110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:37.612 [2024-11-27 00:07:25.607116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:37.612 [2024-11-27 00:07:25.607121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:37.612 
[2024-11-27 00:07:25.607127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:37.612 [2024-11-27 00:07:25.607135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:37.612 [2024-11-27 00:07:25.607141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:37.612 [2024-11-27 00:07:25.607148] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:37.612 [2024-11-27 00:07:25.607155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:37.612 [2024-11-27 00:07:25.607162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:37.612 [2024-11-27 00:07:25.607168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:37.612 [2024-11-27 00:07:25.607174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:37.612 [2024-11-27 00:07:25.607180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:37.612 [2024-11-27 00:07:25.607186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:37.613 [2024-11-27 00:07:25.607192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:37.613 [2024-11-27 00:07:25.607198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:37.613 [2024-11-27 00:07:25.607204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:37.613 [2024-11-27 00:07:25.607211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:37.613 [2024-11-27 00:07:25.607218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:37.613 [2024-11-27 00:07:25.607226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:37.613 [2024-11-27 00:07:25.607236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:37.613 [2024-11-27 00:07:25.607243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:37.613 [2024-11-27 00:07:25.607250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:37.613 [2024-11-27 00:07:25.607256] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:37.613 [2024-11-27 00:07:25.607262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:37.613 [2024-11-27 00:07:25.607268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:37.613 [2024-11-27 00:07:25.607273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:37.613 [2024-11-27 00:07:25.607278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:37.613 [2024-11-27 00:07:25.607284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:37.613 [2024-11-27 00:07:25.607289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.607294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:37.613 [2024-11-27 00:07:25.607300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:33:37.613 [2024-11-27 00:07:25.607306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.612632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.612655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:37.613 [2024-11-27 00:07:25.612664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.295 ms 00:33:37.613 [2024-11-27 00:07:25.612671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.612729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.612735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:37.613 [2024-11-27 00:07:25.612744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:33:37.613 [2024-11-27 00:07:25.612750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.630840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.630885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:37.613 [2024-11-27 00:07:25.630898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.053 ms 00:33:37.613 [2024-11-27 00:07:25.630912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.630943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.630953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:37.613 [2024-11-27 00:07:25.630963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:37.613 [2024-11-27 00:07:25.630975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.631080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.631096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:37.613 [2024-11-27 00:07:25.631106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:33:37.613 [2024-11-27 00:07:25.631116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.631241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.631261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:37.613 [2024-11-27 00:07:25.631270] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:33:37.613 [2024-11-27 00:07:25.631284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.636312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.636342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:37.613 [2024-11-27 00:07:25.636356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.007 ms 00:33:37.613 [2024-11-27 00:07:25.636364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.636452] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:37.613 [2024-11-27 00:07:25.636464] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:37.613 [2024-11-27 00:07:25.636473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.636481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:37.613 [2024-11-27 00:07:25.636490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:37.613 [2024-11-27 00:07:25.636500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.648489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.648635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:37.613 [2024-11-27 00:07:25.648648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.975 ms 00:33:37.613 [2024-11-27 00:07:25.648654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.648746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.648753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:37.613 [2024-11-27 00:07:25.648759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:33:37.613 [2024-11-27 00:07:25.648767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.648816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.648831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:37.613 [2024-11-27 00:07:25.648837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:37.613 [2024-11-27 00:07:25.648843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.649062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.649071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:37.613 [2024-11-27 00:07:25.649076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:33:37.613 [2024-11-27 00:07:25.649082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.649096] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:37.613 [2024-11-27 00:07:25.649103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.649113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:37.613 [2024-11-27 00:07:25.649119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:37.613 [2024-11-27 00:07:25.649124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.655362] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:37.613 [2024-11-27 00:07:25.655539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.655549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:37.613 [2024-11-27 00:07:25.655556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.397 ms 00:33:37.613 [2024-11-27 00:07:25.655562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.657289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.657308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:37.613 [2024-11-27 00:07:25.657316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:33:37.613 [2024-11-27 00:07:25.657323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.657378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.657385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:37.613 [2024-11-27 00:07:25.657391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:37.613 [2024-11-27 00:07:25.657399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.657416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.657423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:37.613 [2024-11-27 00:07:25.657431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:37.613 [2024-11-27 00:07:25.657436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.657457] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:37.613 [2024-11-27 00:07:25.657465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.657471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:37.613 [2024-11-27 00:07:25.657480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:37.613 [2024-11-27 00:07:25.657485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.661262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.661290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:37.613 [2024-11-27 00:07:25.661298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.762 ms 00:33:37.613 [2024-11-27 00:07:25.661309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.661364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.613 [2024-11-27 00:07:25.661375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:37.613 [2024-11-27 00:07:25.661384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:33:37.613 [2024-11-27 00:07:25.661390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.613 [2024-11-27 00:07:25.662046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 57.733 ms, result 0 00:33:38.557  [2024-11-27T00:07:28.080Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-27T00:07:29.023Z] Copying: 31/1024 [MB] (15 MBps) [2024-11-27T00:07:29.963Z] Copying: 50/1024 [MB] (19 MBps) [2024-11-27T00:07:30.908Z] Copying: 71/1024 [MB] (20 MBps) [2024-11-27T00:07:31.850Z] Copying: 91/1024 [MB] (20 MBps) [2024-11-27T00:07:32.794Z] Copying: 111/1024 [MB] (19 MBps) [2024-11-27T00:07:33.738Z] Copying: 132/1024 [MB] (21 MBps) [2024-11-27T00:07:34.683Z] Copying: 155/1024 [MB] (22 MBps) [2024-11-27T00:07:36.073Z] Copying: 177/1024 [MB] (22 MBps) [2024-11-27T00:07:37.020Z] Copying: 197/1024 [MB] (19 MBps) [2024-11-27T00:07:37.966Z] Copying: 217/1024 [MB] (20 MBps) [2024-11-27T00:07:38.912Z] Copying: 235/1024 [MB] (17 MBps) [2024-11-27T00:07:39.856Z] Copying: 246/1024 [MB] (10 MBps) [2024-11-27T00:07:40.800Z] Copying: 256/1024 [MB] (10 MBps) [2024-11-27T00:07:41.740Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-27T00:07:42.683Z] Copying: 279/1024 [MB] (11 MBps) [2024-11-27T00:07:44.069Z] Copying: 290/1024 [MB] (11 MBps) [2024-11-27T00:07:45.012Z] Copying: 302/1024 [MB] (11 MBps) [2024-11-27T00:07:45.957Z] Copying: 313/1024 [MB] (11 MBps) [2024-11-27T00:07:46.902Z] Copying: 325/1024 [MB] (11 MBps) [2024-11-27T00:07:47.842Z] Copying: 336/1024 [MB] (11 MBps) [2024-11-27T00:07:48.785Z] Copying: 348/1024 [MB] (11 MBps) [2024-11-27T00:07:49.732Z] Copying: 359/1024 [MB] (11 MBps) [2024-11-27T00:07:51.116Z] Copying: 370/1024 [MB] (11 MBps) [2024-11-27T00:07:51.689Z] Copying: 382/1024 [MB] (11 MBps) [2024-11-27T00:07:53.138Z] Copying: 393/1024 [MB] (11 MBps) [2024-11-27T00:07:53.711Z] Copying: 405/1024 [MB] (11 MBps) [2024-11-27T00:07:55.099Z] Copying: 416/1024 [MB] (11 MBps) [2024-11-27T00:07:56.044Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-27T00:07:56.988Z] Copying: 438/1024 [MB] (11 MBps) [2024-11-27T00:07:57.929Z] Copying: 449/1024 [MB] (10 MBps) [2024-11-27T00:07:58.874Z] Copying: 460/1024 [MB] (10 MBps) [2024-11-27T00:07:59.818Z] Copying: 473/1024 [MB] (12 MBps) [2024-11-27T00:08:00.764Z] Copying: 494584/1048576 [kB] (10000 kBps) [2024-11-27T00:08:01.709Z] Copying: 504488/1048576 [kB] (9904 kBps) [2024-11-27T00:08:03.097Z] Copying: 503/1024 [MB] (11 MBps) [2024-11-27T00:08:04.042Z] Copying: 515/1024 [MB] (11 MBps) [2024-11-27T00:08:04.986Z] Copying: 526/1024 [MB] (11 MBps) [2024-11-27T00:08:05.958Z] Copying: 537/1024 [MB] (11 MBps) [2024-11-27T00:08:06.901Z] Copying: 549/1024 [MB] (11 MBps) [2024-11-27T00:08:07.846Z] Copying: 562/1024 [MB] (12 MBps) [2024-11-27T00:08:08.792Z] Copying: 572/1024 [MB] (10 MBps) [2024-11-27T00:08:09.761Z] Copying: 596424/1048576 [kB] (10160 kBps) [2024-11-27T00:08:10.705Z] Copying: 593/1024 [MB] (11 MBps) [2024-11-27T00:08:12.090Z] Copying: 604/1024 [MB] (11 MBps) [2024-11-27T00:08:13.033Z] Copying: 616/1024 [MB] (11 MBps) [2024-11-27T00:08:13.977Z] Copying: 628/1024 [MB] (11 MBps) [2024-11-27T00:08:14.919Z] Copying: 639/1024 [MB] (11 MBps) [2024-11-27T00:08:15.863Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-27T00:08:16.809Z] Copying: 660/1024 [MB] (11 MBps) [2024-11-27T00:08:17.753Z] Copying: 671/1024 [MB] (10 MBps) [2024-11-27T00:08:18.697Z] Copying: 683/1024 [MB] (11 MBps) [2024-11-27T00:08:20.083Z] Copying: 694/1024 [MB] (11 MBps) 
[2024-11-27T00:08:21.030Z] Copying: 705/1024 [MB] (11 MBps) [2024-11-27T00:08:22.022Z] Copying: 716/1024 [MB] (10 MBps) [2024-11-27T00:08:23.045Z] Copying: 727/1024 [MB] (11 MBps) [2024-11-27T00:08:23.989Z] Copying: 739/1024 [MB] (11 MBps) [2024-11-27T00:08:24.932Z] Copying: 750/1024 [MB] (11 MBps) [2024-11-27T00:08:25.877Z] Copying: 762/1024 [MB] (11 MBps) [2024-11-27T00:08:26.823Z] Copying: 772/1024 [MB] (10 MBps) [2024-11-27T00:08:27.767Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-27T00:08:28.712Z] Copying: 796/1024 [MB] (11 MBps) [2024-11-27T00:08:30.099Z] Copying: 807/1024 [MB] (11 MBps) [2024-11-27T00:08:31.042Z] Copying: 818/1024 [MB] (11 MBps) [2024-11-27T00:08:31.988Z] Copying: 830/1024 [MB] (11 MBps) [2024-11-27T00:08:32.932Z] Copying: 842/1024 [MB] (12 MBps) [2024-11-27T00:08:33.877Z] Copying: 853/1024 [MB] (10 MBps) [2024-11-27T00:08:34.822Z] Copying: 864/1024 [MB] (11 MBps) [2024-11-27T00:08:35.768Z] Copying: 875/1024 [MB] (11 MBps) [2024-11-27T00:08:36.713Z] Copying: 906992/1048576 [kB] (10220 kBps) [2024-11-27T00:08:38.103Z] Copying: 897/1024 [MB] (11 MBps) [2024-11-27T00:08:38.677Z] Copying: 908/1024 [MB] (11 MBps) [2024-11-27T00:08:40.066Z] Copying: 918/1024 [MB] (10 MBps) [2024-11-27T00:08:41.010Z] Copying: 929/1024 [MB] (10 MBps) [2024-11-27T00:08:41.953Z] Copying: 941/1024 [MB] (11 MBps) [2024-11-27T00:08:42.898Z] Copying: 952/1024 [MB] (11 MBps) [2024-11-27T00:08:43.843Z] Copying: 963/1024 [MB] (11 MBps) [2024-11-27T00:08:44.787Z] Copying: 973/1024 [MB] (10 MBps) [2024-11-27T00:08:45.731Z] Copying: 984/1024 [MB] (11 MBps) [2024-11-27T00:08:46.677Z] Copying: 995/1024 [MB] (11 MBps) [2024-11-27T00:08:48.067Z] Copying: 1006/1024 [MB] (10 MBps) [2024-11-27T00:08:49.013Z] Copying: 1017/1024 [MB] (11 MBps) [2024-11-27T00:08:49.013Z] Copying: 1048292/1048576 [kB] (5872 kBps) [2024-11-27T00:08:49.013Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-27 00:08:48.974044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.882 [2024-11-27 00:08:48.974421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:00.882 [2024-11-27 00:08:48.974449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:00.882 [2024-11-27 00:08:48.974458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.882 [2024-11-27 00:08:48.977406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:00.882 [2024-11-27 00:08:48.979360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.882 [2024-11-27 00:08:48.979400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:00.882 [2024-11-27 00:08:48.979413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.815 ms 00:35:00.882 [2024-11-27 00:08:48.979420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.882 [2024-11-27 00:08:48.990355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.882 [2024-11-27 00:08:48.990394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:00.882 [2024-11-27 00:08:48.990406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.788 ms 00:35:00.882 [2024-11-27 00:08:48.990414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.882 [2024-11-27 00:08:48.990451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.882 [2024-11-27 00:08:48.990462] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:00.882 [2024-11-27 00:08:48.990471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:00.882 [2024-11-27 00:08:48.990481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.882 [2024-11-27 00:08:48.990529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.882 [2024-11-27 00:08:48.990540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:00.882 [2024-11-27 00:08:48.990548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:35:00.882 [2024-11-27 00:08:48.990556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.882 [2024-11-27 00:08:48.990569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:00.882 [2024-11-27 00:08:48.990580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125440 / 261120 wr_cnt: 1 state: open 00:35:00.882 [2024-11-27 00:08:48.990590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:35:00.882 [2024-11-27 00:08:48.990721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:00.882 [2024-11-27 00:08:48.990957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.990965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.990972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.990978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.990986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.990995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991304] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:00.883 [2024-11-27 00:08:48.991362] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:00.883 [2024-11-27 00:08:48.991373] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42e104c0-c23f-4953-a6b2-7de7e0f43cd2 00:35:00.883 [2024-11-27 00:08:48.991381] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125440 00:35:00.883 [2024-11-27 00:08:48.991388] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125472 00:35:00.883 [2024-11-27 00:08:48.991395] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125440 00:35:00.883 [2024-11-27 00:08:48.991402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:35:00.883 [2024-11-27 00:08:48.991411] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:00.883 [2024-11-27 00:08:48.991418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:00.883 [2024-11-27 00:08:48.991426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:00.883 [2024-11-27 00:08:48.991432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:00.883 [2024-11-27 00:08:48.991438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:00.883 [2024-11-27 00:08:48.991445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.883 [2024-11-27 00:08:48.991452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:00.883 [2024-11-27 00:08:48.991459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:35:00.883 [2024-11-27 00:08:48.991467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.883 [2024-11-27 00:08:48.993179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.883 [2024-11-27 00:08:48.993205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:00.883 [2024-11-27 00:08:48.993220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:35:00.883 [2024-11-27 00:08:48.993231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.883 [2024-11-27 00:08:48.993321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.883 [2024-11-27 00:08:48.993330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:00.883 [2024-11-27 00:08:48.993342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:35:00.883 [2024-11-27 00:08:48.993349] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:35:00.883 [2024-11-27 00:08:48.998912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.883 [2024-11-27 00:08:48.998949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:00.883 [2024-11-27 00:08:48.998959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.883 [2024-11-27 00:08:48.998967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.883 [2024-11-27 00:08:48.999018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.883 [2024-11-27 00:08:48.999026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:00.883 [2024-11-27 00:08:48.999033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.883 [2024-11-27 00:08:48.999041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.883 [2024-11-27 00:08:48.999092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.883 [2024-11-27 00:08:48.999103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:00.883 [2024-11-27 00:08:48.999114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.883 [2024-11-27 00:08:48.999122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.883 [2024-11-27 00:08:48.999136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.883 [2024-11-27 00:08:48.999143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:00.883 [2024-11-27 00:08:48.999151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.883 [2024-11-27 00:08:48.999158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.883 [2024-11-27 00:08:49.009646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.145 [2024-11-27 00:08:49.009879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:01.145 [2024-11-27 00:08:49.009896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.145 [2024-11-27 00:08:49.009910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.145 [2024-11-27 00:08:49.018593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.145 [2024-11-27 00:08:49.018632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:01.145 [2024-11-27 00:08:49.018644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.145 [2024-11-27 00:08:49.018651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.145 [2024-11-27 00:08:49.018714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.145 [2024-11-27 00:08:49.018723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:01.145 [2024-11-27 00:08:49.018732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.145 [2024-11-27 00:08:49.018746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.145 [2024-11-27 00:08:49.018775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.145 [2024-11-27 00:08:49.018783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:01.145 [2024-11-27 00:08:49.019014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:35:01.145 [2024-11-27 00:08:49.019038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.145 [2024-11-27 00:08:49.019105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.145 [2024-11-27 00:08:49.019131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:01.145 [2024-11-27 00:08:49.019151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.145 [2024-11-27 00:08:49.019174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.145 [2024-11-27 00:08:49.019219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.146 [2024-11-27 00:08:49.019245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:01.146 [2024-11-27 00:08:49.019265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.146 [2024-11-27 00:08:49.019274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.146 [2024-11-27 00:08:49.019309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.146 [2024-11-27 00:08:49.019318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:01.146 [2024-11-27 00:08:49.019326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.146 [2024-11-27 00:08:49.019336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.146 [2024-11-27 00:08:49.019383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.146 [2024-11-27 00:08:49.019394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:01.146 [2024-11-27 00:08:49.019402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.146 [2024-11-27 00:08:49.019410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.146 [2024-11-27 00:08:49.019530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 47.736 ms, result 0 00:35:01.718 00:35:01.718 00:35:01.718 00:08:49 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:01.718 [2024-11-27 00:08:49.813316] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:35:01.718 [2024-11-27 00:08:49.813499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97666 ] 00:35:01.979 [2024-11-27 00:08:49.959078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.979 [2024-11-27 00:08:49.986716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.979 [2024-11-27 00:08:50.099437] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:01.979 [2024-11-27 00:08:50.099533] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:02.244 [2024-11-27 00:08:50.262165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.262246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:02.244 [2024-11-27 00:08:50.262263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:02.244 [2024-11-27 00:08:50.262273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.262335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.262350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:02.244 [2024-11-27 00:08:50.262360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:35:02.244 [2024-11-27 00:08:50.262375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.262404] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:02.244 [2024-11-27 00:08:50.262678] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:02.244 [2024-11-27 00:08:50.262706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.262715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:02.244 [2024-11-27 00:08:50.262728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:35:02.244 [2024-11-27 00:08:50.262737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.263060] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:02.244 [2024-11-27 00:08:50.263090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.263106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:02.244 [2024-11-27 00:08:50.263118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:35:02.244 [2024-11-27 00:08:50.263130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.263191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.263202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:02.244 [2024-11-27 00:08:50.263211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:35:02.244 [2024-11-27 00:08:50.263219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.263482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:02.244 [2024-11-27 00:08:50.263508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:02.244 [2024-11-27 00:08:50.263517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:35:02.244 [2024-11-27 00:08:50.263526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.263825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.263854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:02.244 [2024-11-27 00:08:50.263866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:35:02.244 [2024-11-27 00:08:50.263874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.263909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.263919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:02.244 [2024-11-27 00:08:50.263928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:02.244 [2024-11-27 00:08:50.263936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.263965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:02.244 [2024-11-27 00:08:50.266143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.266207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:02.244 [2024-11-27 00:08:50.266219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:35:02.244 [2024-11-27 00:08:50.266229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.266265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.266275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:02.244 [2024-11-27 00:08:50.266287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:35:02.244 [2024-11-27 00:08:50.266297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.266360] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:02.244 [2024-11-27 00:08:50.266387] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:02.244 [2024-11-27 00:08:50.266430] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:02.244 [2024-11-27 00:08:50.266451] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:02.244 [2024-11-27 00:08:50.266559] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:02.244 [2024-11-27 00:08:50.266573] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:02.244 [2024-11-27 00:08:50.266589] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:02.244 [2024-11-27 00:08:50.266602] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:02.244 [2024-11-27 00:08:50.266614] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:02.244 [2024-11-27 00:08:50.266623] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:02.244 [2024-11-27 00:08:50.266631] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:02.244 [2024-11-27 00:08:50.266639] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:02.244 [2024-11-27 00:08:50.266646] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:02.244 [2024-11-27 00:08:50.266656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.266664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:02.244 [2024-11-27 00:08:50.266677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:35:02.244 [2024-11-27 00:08:50.266688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.266773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.244 [2024-11-27 00:08:50.266784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:02.244 [2024-11-27 00:08:50.266820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:02.244 [2024-11-27 00:08:50.266829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.244 [2024-11-27 00:08:50.266935] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:02.244 [2024-11-27 00:08:50.266948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:02.244 [2024-11-27 00:08:50.266958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:02.244 [2024-11-27 00:08:50.266966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:02.244 [2024-11-27 00:08:50.266975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:02.244 [2024-11-27 00:08:50.266982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:02.244 [2024-11-27 00:08:50.266990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:02.244 [2024-11-27 00:08:50.266998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:02.244 [2024-11-27 00:08:50.267005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:02.244 [2024-11-27 00:08:50.267013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:02.244 [2024-11-27 00:08:50.267020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:02.244 [2024-11-27 00:08:50.267035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:02.244 [2024-11-27 00:08:50.267043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:02.244 [2024-11-27 00:08:50.267051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:02.245 [2024-11-27 00:08:50.267057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:02.245 [2024-11-27 00:08:50.267064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:02.245 [2024-11-27 00:08:50.267078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:02.245 [2024-11-27 00:08:50.267084] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:02.245 [2024-11-27 00:08:50.267098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:02.245 [2024-11-27 00:08:50.267113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:02.245 [2024-11-27 00:08:50.267124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:02.245 [2024-11-27 00:08:50.267138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:02.245 [2024-11-27 00:08:50.267144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:02.245 [2024-11-27 00:08:50.267160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:02.245 [2024-11-27 00:08:50.267166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:02.245 [2024-11-27 00:08:50.267179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:02.245 [2024-11-27 00:08:50.267185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:02.245 [2024-11-27 00:08:50.267201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:02.245 [2024-11-27 00:08:50.267207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:02.245 [2024-11-27 00:08:50.267213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:02.245 [2024-11-27 00:08:50.267220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:02.245 [2024-11-27 00:08:50.267226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:02.245 [2024-11-27 00:08:50.267232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:02.245 [2024-11-27 00:08:50.267245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:02.245 [2024-11-27 00:08:50.267251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267265] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:02.245 [2024-11-27 00:08:50.267275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:02.245 [2024-11-27 00:08:50.267283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:02.245 [2024-11-27 00:08:50.267293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:02.245 [2024-11-27 00:08:50.267301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:02.245 [2024-11-27 00:08:50.267307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:02.245 [2024-11-27 00:08:50.267314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:02.245 
[2024-11-27 00:08:50.267321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:02.245 [2024-11-27 00:08:50.267327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:02.245 [2024-11-27 00:08:50.267333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:02.245 [2024-11-27 00:08:50.267342] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:02.245 [2024-11-27 00:08:50.267352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:02.245 [2024-11-27 00:08:50.267367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:02.245 [2024-11-27 00:08:50.267374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:02.245 [2024-11-27 00:08:50.267381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:02.245 [2024-11-27 00:08:50.267387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:02.245 [2024-11-27 00:08:50.267396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:02.245 [2024-11-27 00:08:50.267404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:02.245 [2024-11-27 00:08:50.267413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:02.245 [2024-11-27 00:08:50.267423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:02.245 [2024-11-27 00:08:50.267431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:02.245 [2024-11-27 00:08:50.267438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:02.245 [2024-11-27 00:08:50.267446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:02.245 [2024-11-27 00:08:50.267459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:02.245 [2024-11-27 00:08:50.267468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:02.245 [2024-11-27 00:08:50.267476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:02.245 [2024-11-27 00:08:50.267484] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:02.245 [2024-11-27 00:08:50.267500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:02.245 [2024-11-27 00:08:50.267508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:02.245 [2024-11-27 00:08:50.267517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:02.245 [2024-11-27 00:08:50.267524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:02.245 [2024-11-27 00:08:50.267531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:02.245 [2024-11-27 00:08:50.267542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.267551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:02.245 [2024-11-27 00:08:50.267559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:35:02.245 [2024-11-27 00:08:50.267567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.277693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.277744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:02.245 [2024-11-27 00:08:50.277756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.082 ms 00:35:02.245 [2024-11-27 00:08:50.277769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.277883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.277902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:02.245 [2024-11-27 00:08:50.277912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:35:02.245 [2024-11-27 00:08:50.277921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.302572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.302650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:02.245 [2024-11-27 00:08:50.302672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.592 ms 00:35:02.245 [2024-11-27 00:08:50.302688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.302760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.302780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:02.245 [2024-11-27 00:08:50.302828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:02.245 [2024-11-27 00:08:50.302844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.303036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.303066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:02.245 [2024-11-27 00:08:50.303083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:35:02.245 [2024-11-27 00:08:50.303098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.303326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.303369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:02.245 [2024-11-27 00:08:50.303388] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:35:02.245 [2024-11-27 00:08:50.303404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.312253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.312302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:02.245 [2024-11-27 00:08:50.312321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.813 ms 00:35:02.245 [2024-11-27 00:08:50.312329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.312447] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:02.245 [2024-11-27 00:08:50.312463] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:02.245 [2024-11-27 00:08:50.312477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.245 [2024-11-27 00:08:50.312487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:02.245 [2024-11-27 00:08:50.312498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:35:02.245 [2024-11-27 00:08:50.312508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.245 [2024-11-27 00:08:50.324994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.325291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:02.246 [2024-11-27 00:08:50.325315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.469 ms 00:35:02.246 [2024-11-27 00:08:50.325332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.325468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.325480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:02.246 [2024-11-27 00:08:50.325489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:35:02.246 [2024-11-27 00:08:50.325501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.325552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.325571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:02.246 [2024-11-27 00:08:50.325582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:02.246 [2024-11-27 00:08:50.325590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.325941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.325964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:02.246 [2024-11-27 00:08:50.325975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:35:02.246 [2024-11-27 00:08:50.325989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.326007] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:02.246 [2024-11-27 00:08:50.326034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.326052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:35:02.246 [2024-11-27 00:08:50.326067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:02.246 [2024-11-27 00:08:50.326077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.335527] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:02.246 [2024-11-27 00:08:50.335687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.335699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:02.246 [2024-11-27 00:08:50.335710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.582 ms 00:35:02.246 [2024-11-27 00:08:50.335718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.338543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.338583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:02.246 [2024-11-27 00:08:50.338595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:35:02.246 [2024-11-27 00:08:50.338609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.338691] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:02.246 [2024-11-27 00:08:50.339290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.339312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:02.246 [2024-11-27 00:08:50.339324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:35:02.246 [2024-11-27 00:08:50.339335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.339363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.339374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:02.246 [2024-11-27 00:08:50.339384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:02.246 [2024-11-27 00:08:50.339393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.339429] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:02.246 [2024-11-27 00:08:50.339440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.339449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:02.246 [2024-11-27 00:08:50.339458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:02.246 [2024-11-27 00:08:50.339473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.345929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.345982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:02.246 [2024-11-27 00:08:50.345995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.437 ms 00:35:02.246 [2024-11-27 00:08:50.346003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.346092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:02.246 [2024-11-27 00:08:50.346102] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:02.246 [2024-11-27 00:08:50.346113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:35:02.246 [2024-11-27 00:08:50.346120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:02.246 [2024-11-27 00:08:50.347508] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 84.899 ms, result 0 00:35:03.635  [2024-11-27T00:08:52.713Z] Copying: 10008/1048576 [kB] (10008 kBps) [2024-11-27T00:08:53.658Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-27T00:08:54.603Z] Copying: 31/1024 [MB] (11 MBps) [2024-11-27T00:08:55.548Z] Copying: 43/1024 [MB] (11 MBps) [2024-11-27T00:08:56.937Z] Copying: 54/1024 [MB] (10 MBps) [2024-11-27T00:08:57.881Z] Copying: 64/1024 [MB] (10 MBps) [2024-11-27T00:08:58.825Z] Copying: 76/1024 [MB] (11 MBps) [2024-11-27T00:08:59.768Z] Copying: 88/1024 [MB] (11 MBps) [2024-11-27T00:09:00.714Z] Copying: 100/1024 [MB] (11 MBps) [2024-11-27T00:09:01.659Z] Copying: 112/1024 [MB] (12 MBps) [2024-11-27T00:09:02.604Z] Copying: 126/1024 [MB] (13 MBps) [2024-11-27T00:09:03.548Z] Copying: 137/1024 [MB] (11 MBps) [2024-11-27T00:09:04.932Z] Copying: 149/1024 [MB] (11 MBps) [2024-11-27T00:09:05.874Z] Copying: 161/1024 [MB] (11 MBps) [2024-11-27T00:09:06.818Z] Copying: 173/1024 [MB] (11 MBps) [2024-11-27T00:09:07.763Z] Copying: 185/1024 [MB] (11 MBps) [2024-11-27T00:09:08.708Z] Copying: 195/1024 [MB] (10 MBps) [2024-11-27T00:09:09.652Z] Copying: 206/1024 [MB] (11 MBps) [2024-11-27T00:09:10.595Z] Copying: 217/1024 [MB] (11 MBps) [2024-11-27T00:09:11.984Z] Copying: 229/1024 [MB] (11 MBps) [2024-11-27T00:09:12.558Z] Copying: 240/1024 [MB] (11 MBps) [2024-11-27T00:09:13.588Z] Copying: 251/1024 [MB] (10 MBps) [2024-11-27T00:09:14.980Z] Copying: 266/1024 [MB] (15 MBps) [2024-11-27T00:09:15.553Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-27T00:09:16.937Z] Copying: 288/1024 [MB] (11 MBps) [2024-11-27T00:09:17.878Z] Copying: 299/1024 [MB] (10 MBps) [2024-11-27T00:09:18.821Z] Copying: 310/1024 [MB] (11 MBps) [2024-11-27T00:09:19.775Z] Copying: 322/1024 [MB] (11 MBps) [2024-11-27T00:09:20.721Z] Copying: 335/1024 [MB] (12 MBps) [2024-11-27T00:09:21.666Z] Copying: 349/1024 [MB] (13 MBps) [2024-11-27T00:09:22.610Z] Copying: 359/1024 [MB] (10 MBps) [2024-11-27T00:09:23.554Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-27T00:09:24.940Z] Copying: 382/1024 [MB] (10 MBps) [2024-11-27T00:09:25.885Z] Copying: 395/1024 [MB] (13 MBps) [2024-11-27T00:09:26.828Z] Copying: 407/1024 [MB] (11 MBps) [2024-11-27T00:09:27.772Z] Copying: 419/1024 [MB] (11 MBps) [2024-11-27T00:09:28.717Z] Copying: 430/1024 [MB] (11 MBps) [2024-11-27T00:09:29.662Z] Copying: 441/1024 [MB] (11 MBps) [2024-11-27T00:09:30.607Z] Copying: 453/1024 [MB] (11 MBps) [2024-11-27T00:09:31.552Z] Copying: 464/1024 [MB] (11 MBps) [2024-11-27T00:09:32.941Z] Copying: 477/1024 [MB] (12 MBps) [2024-11-27T00:09:33.886Z] Copying: 488/1024 [MB] (11 MBps) [2024-11-27T00:09:34.832Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-27T00:09:35.779Z] Copying: 511/1024 [MB] (11 MBps) [2024-11-27T00:09:36.724Z] Copying: 524/1024 [MB] (13 MBps) [2024-11-27T00:09:37.670Z] Copying: 538/1024 [MB] (13 MBps) [2024-11-27T00:09:38.616Z] Copying: 548/1024 [MB] (10 MBps) [2024-11-27T00:09:39.558Z] Copying: 559/1024 [MB] (10 MBps) [2024-11-27T00:09:40.974Z] Copying: 570/1024 [MB] (11 MBps) [2024-11-27T00:09:41.593Z] Copying: 589/1024 [MB] (19 MBps) [2024-11-27T00:09:42.980Z] Copying: 600/1024 [MB] (10 MBps) 
[2024-11-27T00:09:43.552Z] Copying: 612/1024 [MB] (11 MBps) [2024-11-27T00:09:44.941Z] Copying: 623/1024 [MB] (10 MBps) [2024-11-27T00:09:45.886Z] Copying: 634/1024 [MB] (10 MBps) [2024-11-27T00:09:46.833Z] Copying: 645/1024 [MB] (10 MBps) [2024-11-27T00:09:47.781Z] Copying: 656/1024 [MB] (11 MBps) [2024-11-27T00:09:48.730Z] Copying: 667/1024 [MB] (10 MBps) [2024-11-27T00:09:49.675Z] Copying: 678/1024 [MB] (11 MBps) [2024-11-27T00:09:50.618Z] Copying: 689/1024 [MB] (10 MBps) [2024-11-27T00:09:51.565Z] Copying: 702/1024 [MB] (13 MBps) [2024-11-27T00:09:52.954Z] Copying: 713/1024 [MB] (10 MBps) [2024-11-27T00:09:53.900Z] Copying: 723/1024 [MB] (10 MBps) [2024-11-27T00:09:54.846Z] Copying: 735/1024 [MB] (11 MBps) [2024-11-27T00:09:55.792Z] Copying: 747/1024 [MB] (11 MBps) [2024-11-27T00:09:56.738Z] Copying: 757/1024 [MB] (10 MBps) [2024-11-27T00:09:57.682Z] Copying: 769/1024 [MB] (11 MBps) [2024-11-27T00:09:58.626Z] Copying: 783/1024 [MB] (13 MBps) [2024-11-27T00:09:59.570Z] Copying: 807/1024 [MB] (23 MBps) [2024-11-27T00:10:00.957Z] Copying: 831/1024 [MB] (24 MBps) [2024-11-27T00:10:01.902Z] Copying: 847/1024 [MB] (15 MBps) [2024-11-27T00:10:02.848Z] Copying: 870/1024 [MB] (23 MBps) [2024-11-27T00:10:03.794Z] Copying: 886/1024 [MB] (15 MBps) [2024-11-27T00:10:04.737Z] Copying: 897/1024 [MB] (11 MBps) [2024-11-27T00:10:05.683Z] Copying: 912/1024 [MB] (14 MBps) [2024-11-27T00:10:06.629Z] Copying: 925/1024 [MB] (12 MBps) [2024-11-27T00:10:07.573Z] Copying: 943/1024 [MB] (18 MBps) [2024-11-27T00:10:08.961Z] Copying: 962/1024 [MB] (18 MBps) [2024-11-27T00:10:09.904Z] Copying: 979/1024 [MB] (16 MBps) [2024-11-27T00:10:10.849Z] Copying: 993/1024 [MB] (14 MBps) [2024-11-27T00:10:11.793Z] Copying: 1008/1024 [MB] (15 MBps) [2024-11-27T00:10:11.793Z] Copying: 1022/1024 [MB] (13 MBps) [2024-11-27T00:10:12.056Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-27 00:10:11.892028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.925 [2024-11-27 00:10:11.892458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:23.925 [2024-11-27 00:10:11.892490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:23.925 [2024-11-27 00:10:11.892511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.925 [2024-11-27 00:10:11.892554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:23.925 [2024-11-27 00:10:11.893569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.925 [2024-11-27 00:10:11.893608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:23.925 [2024-11-27 00:10:11.893622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:36:23.925 [2024-11-27 00:10:11.893640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.925 [2024-11-27 00:10:11.893927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.925 [2024-11-27 00:10:11.893941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:23.925 [2024-11-27 00:10:11.893951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:36:23.925 [2024-11-27 00:10:11.893961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.925 [2024-11-27 00:10:11.894001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.925 [2024-11-27 00:10:11.894012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Fast persist NV cache metadata 00:36:23.925 [2024-11-27 00:10:11.894023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:23.925 [2024-11-27 00:10:11.894033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.925 [2024-11-27 00:10:11.894108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.925 [2024-11-27 00:10:11.894122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:23.925 [2024-11-27 00:10:11.894132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:36:23.925 [2024-11-27 00:10:11.894140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.925 [2024-11-27 00:10:11.894156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:23.925 [2024-11-27 00:10:11.894177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:23.925 [2024-11-27 00:10:11.894200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.894689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: 
free 00:36:23.925 [2024-11-27 00:10:11.895153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.895543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 
261120 wr_cnt: 0 state: free 00:36:23.925 [2024-11-27 00:10:11.896676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.896699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.896723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.896747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.896769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.897234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.897369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.897467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.898998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899095] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:23.926 [2024-11-27 00:10:11.899289] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:23.926 [2024-11-27 00:10:11.899316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42e104c0-c23f-4953-a6b2-7de7e0f43cd2 00:36:23.926 [2024-11-27 00:10:11.899353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:23.926 [2024-11-27 00:10:11.899375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5664 00:36:23.926 [2024-11-27 00:10:11.899396] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5632 00:36:23.926 [2024-11-27 00:10:11.899439] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0057 00:36:23.926 [2024-11-27 00:10:11.899460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:23.926 [2024-11-27 00:10:11.899482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:23.926 [2024-11-27 00:10:11.899506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:23.926 [2024-11-27 00:10:11.899528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:23.926 [2024-11-27 00:10:11.899547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:23.926 [2024-11-27 00:10:11.899581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.926 [2024-11-27 00:10:11.899607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:23.926 [2024-11-27 00:10:11.899632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.423 ms 00:36:23.926 [2024-11-27 00:10:11.899659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.926 [2024-11-27 00:10:11.904550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.926 [2024-11-27 00:10:11.904898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:23.926 [2024-11-27 00:10:11.905093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:36:23.926 [2024-11-27 00:10:11.905421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.926 [2024-11-27 00:10:11.905597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.926 [2024-11-27 00:10:11.905739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:23.926 [2024-11-27 00:10:11.905769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:36:23.926 [2024-11-27 00:10:11.905816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:36:23.926 [2024-11-27 00:10:11.915404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.926 [2024-11-27 00:10:11.915577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:23.926 [2024-11-27 00:10:11.915634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.926 [2024-11-27 00:10:11.915658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.926 [2024-11-27 00:10:11.915760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.926 [2024-11-27 00:10:11.915786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:23.926 [2024-11-27 00:10:11.915825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.926 [2024-11-27 00:10:11.915846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.926 [2024-11-27 00:10:11.915922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.926 [2024-11-27 00:10:11.915958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:23.926 [2024-11-27 00:10:11.916086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.926 [2024-11-27 00:10:11.916112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.926 [2024-11-27 00:10:11.916144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.926 [2024-11-27 00:10:11.916168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:23.926 [2024-11-27 00:10:11.916190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.926 [2024-11-27 00:10:11.916212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.926 [2024-11-27 00:10:11.930784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.926 [2024-11-27 00:10:11.930980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:23.927 [2024-11-27 00:10:11.931035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.927 [2024-11-27 00:10:11.931058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.927 [2024-11-27 00:10:11.942328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.927 [2024-11-27 00:10:11.942499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:23.927 [2024-11-27 00:10:11.942553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.927 [2024-11-27 00:10:11.942577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.927 [2024-11-27 00:10:11.942640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.927 [2024-11-27 00:10:11.942665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:23.927 [2024-11-27 00:10:11.942694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.927 [2024-11-27 00:10:11.942714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.927 [2024-11-27 00:10:11.942762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.927 [2024-11-27 00:10:11.942856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:23.927 [2024-11-27 00:10:11.942881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.927 
[2024-11-27 00:10:11.942903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.927 [2024-11-27 00:10:11.942973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.927 [2024-11-27 00:10:11.943012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:23.927 [2024-11-27 00:10:11.943150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.927 [2024-11-27 00:10:11.943176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.927 [2024-11-27 00:10:11.943222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.927 [2024-11-27 00:10:11.943285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:23.927 [2024-11-27 00:10:11.943309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.927 [2024-11-27 00:10:11.943331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.927 [2024-11-27 00:10:11.943386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.927 [2024-11-27 00:10:11.943411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:23.927 [2024-11-27 00:10:11.943432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.927 [2024-11-27 00:10:11.943458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.927 [2024-11-27 00:10:11.943519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:23.927 [2024-11-27 00:10:11.943545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:23.927 [2024-11-27 00:10:11.943565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:23.927 [2024-11-27 00:10:11.943586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.927 [2024-11-27 00:10:11.943737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 51.680 ms, result 0 00:36:24.188 00:36:24.188 00:36:24.188 00:10:12 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:26.735 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94854 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94854 ']' 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94854 00:36:26.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94854) - No such process 00:36:26.735 Process with pid 94854 is not found 00:36:26.735 Remove shared memory files 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94854 is not found' 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 
00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_band_md /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_l2p_l1 /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_l2p_l2 /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_l2p_l2_ctx /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_nvc_md /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_p2l_pool /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_sb /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_sb_shm /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_trim_bitmap /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_trim_log /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_trim_md /dev/hugepages/ftl_42e104c0-c23f-4953-a6b2-7de7e0f43cd2_vmap 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:26.735 ************************************ 00:36:26.735 END TEST ftl_restore_fast 00:36:26.735 ************************************ 00:36:26.735 00:36:26.735 real 6m1.612s 00:36:26.735 user 5m49.853s 00:36:26.735 sys 0m11.490s 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.735 00:10:14 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:26.735 Process with pid 86106 is not found 00:36:26.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.735 00:10:14 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:26.735 00:10:14 ftl -- ftl/ftl.sh@14 -- # killprocess 86106 00:36:26.735 00:10:14 ftl -- common/autotest_common.sh@954 -- # '[' -z 86106 ']' 00:36:26.735 00:10:14 ftl -- common/autotest_common.sh@958 -- # kill -0 86106 00:36:26.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86106) - No such process 00:36:26.735 00:10:14 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 86106 is not found' 00:36:26.735 00:10:14 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:26.735 00:10:14 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98530 00:36:26.736 00:10:14 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98530 00:36:26.736 00:10:14 ftl -- common/autotest_common.sh@835 -- # '[' -z 98530 ']' 00:36:26.736 00:10:14 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.736 00:10:14 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:26.736 00:10:14 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.736 00:10:14 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:26.736 00:10:14 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:26.736 00:10:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:26.736 [2024-11-27 00:10:14.759888] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 22.11.4 initialization... 
00:36:26.736 [2024-11-27 00:10:14.760169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98530 ] 00:36:26.995 [2024-11-27 00:10:14.906418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.996 [2024-11-27 00:10:14.932988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.568 00:10:15 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.568 00:10:15 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:27.568 00:10:15 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:27.829 nvme0n1 00:36:27.829 00:10:15 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:27.829 00:10:15 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:27.829 00:10:15 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:28.090 00:10:16 ftl -- ftl/common.sh@28 -- # stores=ff582a7e-0643-4fc5-b08b-2b9193323c86 00:36:28.090 00:10:16 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:28.090 00:10:16 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff582a7e-0643-4fc5-b08b-2b9193323c86 00:36:28.351 00:10:16 ftl -- ftl/ftl.sh@23 -- # killprocess 98530 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@954 -- # '[' -z 98530 ']' 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@958 -- # kill -0 98530 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@959 -- # uname 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98530 00:36:28.351 killing process with pid 98530 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98530' 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@973 -- # kill 98530 00:36:28.351 00:10:16 ftl -- common/autotest_common.sh@978 -- # wait 98530 00:36:28.612 00:10:16 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:28.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:28.873 Waiting for block devices as requested 00:36:28.873 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:29.135 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:29.135 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:29.397 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:34.792 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:34.792 Remove shared memory files 00:36:34.792 00:10:22 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:34.792 00:10:22 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:34.792 00:10:22 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:34.792 00:10:22 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:34.792 00:10:22 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:34.792 00:10:22 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:34.792 00:10:22 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:34.792 
************************************ 00:36:34.792 END TEST ftl 00:36:34.792 ************************************ 00:36:34.792 00:36:34.792 real 18m56.887s 00:36:34.792 user 20m47.789s 00:36:34.792 sys 1m37.884s 00:36:34.792 00:10:22 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.792 00:10:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:34.792 00:10:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:34.792 00:10:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:34.792 00:10:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:34.792 00:10:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:34.792 00:10:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:34.792 00:10:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:34.792 00:10:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:34.792 00:10:22 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:34.792 00:10:22 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:34.792 00:10:22 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:34.792 00:10:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:34.792 00:10:22 -- common/autotest_common.sh@10 -- # set +x 00:36:34.792 00:10:22 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:34.792 00:10:22 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:34.792 00:10:22 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:34.792 00:10:22 -- common/autotest_common.sh@10 -- # set +x 00:36:36.180 INFO: APP EXITING 00:36:36.180 INFO: killing all VMs 00:36:36.180 INFO: killing vhost app 00:36:36.180 INFO: EXIT DONE 00:36:36.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:36.754 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:36.754 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:36.754 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:36.754 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:37.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:37.591 Cleaning 00:36:37.591 Removing: /var/run/dpdk/spdk0/config 00:36:37.591 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:37.591 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:37.591 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:37.591 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:37.591 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:37.591 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:37.591 Removing: /var/run/dpdk/spdk0 00:36:37.591 Removing: /var/run/dpdk/spdk_pid68982 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69145 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69341 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69423 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69452 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69563 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69581 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69764 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69837 00:36:37.591 Removing: /var/run/dpdk/spdk_pid69917 00:36:37.591 Removing: /var/run/dpdk/spdk_pid70017 00:36:37.591 Removing: /var/run/dpdk/spdk_pid70097 00:36:37.591 Removing: /var/run/dpdk/spdk_pid70137 00:36:37.591 Removing: /var/run/dpdk/spdk_pid70168 00:36:37.591 Removing: /var/run/dpdk/spdk_pid70238 00:36:37.591 Removing: /var/run/dpdk/spdk_pid70322 00:36:37.591 Removing: /var/run/dpdk/spdk_pid70742 00:36:37.591 Removing: /var/run/dpdk/spdk_pid70789 
00:36:37.591 Removing: /var/run/dpdk/spdk_pid70836
00:36:37.591 Removing: /var/run/dpdk/spdk_pid70852
00:36:37.591 Removing: /var/run/dpdk/spdk_pid70910
00:36:37.591 Removing: /var/run/dpdk/spdk_pid70926
00:36:37.591 Removing: /var/run/dpdk/spdk_pid70984
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71000
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71042
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71060
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71102
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71120
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71247
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71278
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71367
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71528
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71596
00:36:37.591 Removing: /var/run/dpdk/spdk_pid71621
00:36:37.591 Removing: /var/run/dpdk/spdk_pid72039
00:36:37.591 Removing: /var/run/dpdk/spdk_pid72130
00:36:37.591 Removing: /var/run/dpdk/spdk_pid72230
00:36:37.591 Removing: /var/run/dpdk/spdk_pid72267
00:36:37.591 Removing: /var/run/dpdk/spdk_pid72292
00:36:37.591 Removing: /var/run/dpdk/spdk_pid72365
00:36:37.591 Removing: /var/run/dpdk/spdk_pid72984
00:36:37.591 Removing: /var/run/dpdk/spdk_pid73015
00:36:37.591 Removing: /var/run/dpdk/spdk_pid73479
00:36:37.591 Removing: /var/run/dpdk/spdk_pid73571
00:36:37.591 Removing: /var/run/dpdk/spdk_pid73675
00:36:37.591 Removing: /var/run/dpdk/spdk_pid73717
00:36:37.591 Removing: /var/run/dpdk/spdk_pid73737
00:36:37.591 Removing: /var/run/dpdk/spdk_pid73757
00:36:37.591 Removing: /var/run/dpdk/spdk_pid75594
00:36:37.591 Removing: /var/run/dpdk/spdk_pid75713
00:36:37.591 Removing: /var/run/dpdk/spdk_pid75724
00:36:37.591 Removing: /var/run/dpdk/spdk_pid75736
00:36:37.591 Removing: /var/run/dpdk/spdk_pid75776
00:36:37.592 Removing: /var/run/dpdk/spdk_pid75780
00:36:37.592 Removing: /var/run/dpdk/spdk_pid75792
00:36:37.592 Removing: /var/run/dpdk/spdk_pid75837
00:36:37.592 Removing: /var/run/dpdk/spdk_pid75841
00:36:37.592 Removing: /var/run/dpdk/spdk_pid75853
00:36:37.592 Removing: /var/run/dpdk/spdk_pid75892
00:36:37.592 Removing: /var/run/dpdk/spdk_pid75896
00:36:37.592 Removing: /var/run/dpdk/spdk_pid75908
00:36:37.592 Removing: /var/run/dpdk/spdk_pid77302
00:36:37.592 Removing: /var/run/dpdk/spdk_pid77393
00:36:37.592 Removing: /var/run/dpdk/spdk_pid78790
00:36:37.592 Removing: /var/run/dpdk/spdk_pid80526
00:36:37.592 Removing: /var/run/dpdk/spdk_pid80583
00:36:37.592 Removing: /var/run/dpdk/spdk_pid80653
00:36:37.592 Removing: /var/run/dpdk/spdk_pid80746
00:36:37.592 Removing: /var/run/dpdk/spdk_pid80832
00:36:37.592 Removing: /var/run/dpdk/spdk_pid80930
00:36:37.592 Removing: /var/run/dpdk/spdk_pid80982
00:36:37.592 Removing: /var/run/dpdk/spdk_pid81052
00:36:37.592 Removing: /var/run/dpdk/spdk_pid81150
00:36:37.592 Removing: /var/run/dpdk/spdk_pid81231
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81321
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81383
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81455
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81548
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81634
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81719
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81782
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81846
00:36:37.854 Removing: /var/run/dpdk/spdk_pid81945
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82031
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82120
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82179
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82242
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82311
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82378
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82477
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82558
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82651
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82703
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82772
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82835
00:36:37.854 Removing: /var/run/dpdk/spdk_pid82908
00:36:37.854 Removing: /var/run/dpdk/spdk_pid83001
00:36:37.854 Removing: /var/run/dpdk/spdk_pid83081
00:36:37.854 Removing: /var/run/dpdk/spdk_pid83219
00:36:37.854 Removing: /var/run/dpdk/spdk_pid83487
00:36:37.854 Removing: /var/run/dpdk/spdk_pid83521
00:36:37.854 Removing: /var/run/dpdk/spdk_pid83967
00:36:37.854 Removing: /var/run/dpdk/spdk_pid84149
00:36:37.854 Removing: /var/run/dpdk/spdk_pid84237
00:36:37.854 Removing: /var/run/dpdk/spdk_pid84338
00:36:37.854 Removing: /var/run/dpdk/spdk_pid84379
00:36:37.854 Removing: /var/run/dpdk/spdk_pid84403
00:36:37.854 Removing: /var/run/dpdk/spdk_pid84709
00:36:37.854 Removing: /var/run/dpdk/spdk_pid84747
00:36:37.854 Removing: /var/run/dpdk/spdk_pid84798
00:36:37.854 Removing: /var/run/dpdk/spdk_pid85164
00:36:37.854 Removing: /var/run/dpdk/spdk_pid85308
00:36:37.854 Removing: /var/run/dpdk/spdk_pid86106
00:36:37.854 Removing: /var/run/dpdk/spdk_pid86222
00:36:37.854 Removing: /var/run/dpdk/spdk_pid86380
00:36:37.854 Removing: /var/run/dpdk/spdk_pid86477
00:36:37.854 Removing: /var/run/dpdk/spdk_pid86780
00:36:37.854 Removing: /var/run/dpdk/spdk_pid87022
00:36:37.854 Removing: /var/run/dpdk/spdk_pid87374
00:36:37.854 Removing: /var/run/dpdk/spdk_pid87534
00:36:37.854 Removing: /var/run/dpdk/spdk_pid87700
00:36:37.854 Removing: /var/run/dpdk/spdk_pid87740
00:36:37.854 Removing: /var/run/dpdk/spdk_pid87940
00:36:37.854 Removing: /var/run/dpdk/spdk_pid87958
00:36:37.854 Removing: /var/run/dpdk/spdk_pid88001
00:36:37.854 Removing: /var/run/dpdk/spdk_pid88270
00:36:37.854 Removing: /var/run/dpdk/spdk_pid88484
00:36:37.854 Removing: /var/run/dpdk/spdk_pid89195
00:36:37.854 Removing: /var/run/dpdk/spdk_pid89857
00:36:37.854 Removing: /var/run/dpdk/spdk_pid90333
00:36:37.854 Removing: /var/run/dpdk/spdk_pid91142
00:36:37.854 Removing: /var/run/dpdk/spdk_pid91284
00:36:37.854 Removing: /var/run/dpdk/spdk_pid91361
00:36:37.854 Removing: /var/run/dpdk/spdk_pid91948
00:36:37.854 Removing: /var/run/dpdk/spdk_pid92006
00:36:37.854 Removing: /var/run/dpdk/spdk_pid92640
00:36:37.854 Removing: /var/run/dpdk/spdk_pid93105
00:36:37.854 Removing: /var/run/dpdk/spdk_pid93894
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94037
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94073
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94120
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94175
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94229
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94406
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94476
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94534
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94589
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94623
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94684
00:36:37.854 Removing: /var/run/dpdk/spdk_pid94854
00:36:37.854 Removing: /var/run/dpdk/spdk_pid95069
00:36:37.854 Removing: /var/run/dpdk/spdk_pid95883
00:36:37.854 Removing: /var/run/dpdk/spdk_pid96826
00:36:37.854 Removing: /var/run/dpdk/spdk_pid97666
00:36:37.854 Removing: /var/run/dpdk/spdk_pid98530
00:36:37.854 Clean
00:36:38.116 00:10:25 -- common/autotest_common.sh@1453 -- # return 0
00:36:38.116 00:10:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:38.116 00:10:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:38.116 00:10:25 -- common/autotest_common.sh@10 -- # set +x
00:36:38.116 00:10:26 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:38.116 00:10:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:38.116 00:10:26 -- common/autotest_common.sh@10 -- # set +x
00:36:38.116 00:10:26 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:38.116 00:10:26 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:36:38.116 00:10:26 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:36:38.116 00:10:26 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:38.116 00:10:26 -- spdk/autotest.sh@398 -- # hostname
00:36:38.116 00:10:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:36:38.378 geninfo: WARNING: invalid characters removed from testname!
00:37:04.986 00:10:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:07.534 00:10:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:10.074 00:10:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:11.985 00:10:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:14.536 00:11:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:17.086 00:11:05 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:19.626 00:11:07 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:19.626 00:11:07 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:19.626 00:11:07 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:37:19.626 00:11:07 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:19.626 00:11:07 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:19.626 00:11:07 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:19.626 + [[ -n 5770 ]]
00:37:19.626 + sudo kill 5770
00:37:19.637 [Pipeline] }
00:37:19.658 [Pipeline] // timeout
00:37:19.665 [Pipeline] }
00:37:19.681 [Pipeline] // stage
00:37:19.688 [Pipeline] }
00:37:19.705 [Pipeline] // catchError
00:37:19.716 [Pipeline] stage
00:37:19.719 [Pipeline] { (Stop VM)
00:37:19.734 [Pipeline] sh
00:37:20.023 + vagrant halt
00:37:23.332 ==> default: Halting domain...
00:37:28.638 [Pipeline] sh
00:37:28.928 + vagrant destroy -f
00:37:31.515 ==> default: Removing domain...
00:37:31.840 [Pipeline] sh
00:37:32.160 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:37:32.171 [Pipeline] }
00:37:32.188 [Pipeline] // stage
00:37:32.195 [Pipeline] }
00:37:32.212 [Pipeline] // dir
00:37:32.219 [Pipeline] }
00:37:32.243 [Pipeline] // wrap
00:37:32.250 [Pipeline] }
00:37:32.263 [Pipeline] // catchError
00:37:32.272 [Pipeline] stage
00:37:32.274 [Pipeline] { (Epilogue)
00:37:32.288 [Pipeline] sh
00:37:32.574 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:37.865 [Pipeline] catchError
00:37:37.867 [Pipeline] {
00:37:37.881 [Pipeline] sh
00:37:38.166 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:38.427 Artifacts sizes are good
00:37:38.436 [Pipeline] }
00:37:38.450 [Pipeline] // catchError
00:37:38.461 [Pipeline] archiveArtifacts
00:37:38.469 Archiving artifacts
00:37:38.563 [Pipeline] cleanWs
00:37:38.575 [WS-CLEANUP] Deleting project workspace...
00:37:38.575 [WS-CLEANUP] Deferred wipeout is used...
00:37:38.582 [WS-CLEANUP] done
00:37:38.584 [Pipeline] }
00:37:38.599 [Pipeline] // stage
00:37:38.605 [Pipeline] }
00:37:38.619 [Pipeline] // node
00:37:38.624 [Pipeline] End of Pipeline
00:37:38.667 Finished: SUCCESS