00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2234 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3497 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.148 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.149 The recommended git tool is: git 00:00:00.149 using credential 00000000-0000-0000-0000-000000000002 00:00:00.150 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.215 Fetching changes from the remote Git repository 00:00:00.218 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.286 Using shallow fetch with depth 1 00:00:00.286 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.286 > git --version # timeout=10 00:00:00.339 > git --version # 'git version 2.39.2' 00:00:00.339 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.365 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.365 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.993 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.007 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.020 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:08.020 > git config core.sparsecheckout # timeout=10 00:00:08.032 > git read-tree -mu HEAD # timeout=10 00:00:08.048 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:08.066 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:08.066 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:08.152 [Pipeline] Start of Pipeline 00:00:08.162 [Pipeline] library 00:00:08.163 Loading library shm_lib@master 00:00:08.163 Library shm_lib@master is cached. Copying from home. 00:00:08.175 [Pipeline] node 00:00:08.186 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.188 [Pipeline] { 00:00:08.196 [Pipeline] catchError 00:00:08.197 [Pipeline] { 00:00:08.206 [Pipeline] wrap 00:00:08.213 [Pipeline] { 00:00:08.221 [Pipeline] stage 00:00:08.223 [Pipeline] { (Prologue) 00:00:08.243 [Pipeline] echo 00:00:08.244 Node: VM-host-SM38 00:00:08.250 [Pipeline] cleanWs 00:00:08.261 [WS-CLEANUP] Deleting project workspace... 00:00:08.261 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.268 [WS-CLEANUP] done 00:00:08.510 [Pipeline] setCustomBuildProperty 00:00:08.587 [Pipeline] httpRequest 00:00:08.956 [Pipeline] echo 00:00:08.958 Sorcerer 10.211.164.101 is alive 00:00:08.968 [Pipeline] retry 00:00:08.971 [Pipeline] { 00:00:08.989 [Pipeline] httpRequest 00:00:08.995 HttpMethod: GET 00:00:08.995 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.996 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:09.006 Response Code: HTTP/1.1 200 OK 00:00:09.007 Success: Status code 200 is in the accepted range: 200,404 00:00:09.007 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:11.258 [Pipeline] } 00:00:11.274 [Pipeline] // retry 00:00:11.282 [Pipeline] sh 00:00:11.566 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:11.585 [Pipeline] httpRequest 00:00:12.255 [Pipeline] echo 00:00:12.257 Sorcerer 10.211.164.101 is alive 00:00:12.266 [Pipeline] retry 00:00:12.268 [Pipeline] { 00:00:12.284 [Pipeline] httpRequest 00:00:12.290 HttpMethod: GET 00:00:12.290 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:12.291 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:12.306 Response Code: HTTP/1.1 200 OK 00:00:12.307 Success: Status code 200 is in the accepted range: 200,404 00:00:12.307 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:47.895 [Pipeline] } 00:00:47.913 [Pipeline] // retry 00:00:47.920 [Pipeline] sh 00:00:48.233 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:50.786 [Pipeline] sh 00:00:51.070 + git -C spdk log --oneline -n5 00:00:51.070 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:51.070 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:51.070 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:51.070 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:51.070 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:00:51.092 [Pipeline] withCredentials 00:00:51.104 > git --version # timeout=10 00:00:51.119 > git --version # 'git version 2.39.2' 00:00:51.139 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:51.141 [Pipeline] { 00:00:51.152 [Pipeline] retry 00:00:51.154 [Pipeline] { 00:00:51.170 [Pipeline] sh 00:00:51.455 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:51.469 [Pipeline] } 00:00:51.518 [Pipeline] // retry 00:00:51.523 [Pipeline] } 00:00:51.540 [Pipeline] // withCredentials 00:00:51.550 [Pipeline] httpRequest 00:00:51.952 [Pipeline] echo 00:00:51.953 Sorcerer 10.211.164.101 is alive 00:00:51.961 [Pipeline] retry 00:00:51.963 [Pipeline] { 00:00:51.975 [Pipeline] httpRequest 00:00:51.981 HttpMethod: GET 00:00:51.981 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:51.982 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:51.988 Response Code: HTTP/1.1 200 OK 00:00:51.988 Success: Status code 200 is in the accepted range: 200,404 00:00:51.989 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.605 [Pipeline] } 00:01:36.623 [Pipeline] // 
retry 00:01:36.631 [Pipeline] sh 00:01:36.917 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.848 [Pipeline] sh 00:01:39.133 + git -C dpdk log --oneline -n5 00:01:39.133 caf0f5d395 version: 22.11.4 00:01:39.133 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:39.133 dc9c799c7d vhost: fix missing spinlock unlock 00:01:39.133 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:39.133 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:39.153 [Pipeline] writeFile 00:01:39.168 [Pipeline] sh 00:01:39.455 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:39.468 [Pipeline] sh 00:01:39.754 + cat autorun-spdk.conf 00:01:39.754 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.754 SPDK_TEST_NVME=1 00:01:39.754 SPDK_TEST_FTL=1 00:01:39.754 SPDK_TEST_ISAL=1 00:01:39.754 SPDK_RUN_ASAN=1 00:01:39.754 SPDK_RUN_UBSAN=1 00:01:39.754 SPDK_TEST_XNVME=1 00:01:39.754 SPDK_TEST_NVME_FDP=1 00:01:39.754 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:39.754 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.754 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.763 RUN_NIGHTLY=1 00:01:39.765 [Pipeline] } 00:01:39.779 [Pipeline] // stage 00:01:39.795 [Pipeline] stage 00:01:39.797 [Pipeline] { (Run VM) 00:01:39.810 [Pipeline] sh 00:01:40.095 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:40.095 + echo 'Start stage prepare_nvme.sh' 00:01:40.095 Start stage prepare_nvme.sh 00:01:40.095 + [[ -n 10 ]] 00:01:40.095 + disk_prefix=ex10 00:01:40.095 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:40.095 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:40.095 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:40.095 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.095 ++ SPDK_TEST_NVME=1 00:01:40.095 ++ SPDK_TEST_FTL=1 00:01:40.095 ++ SPDK_TEST_ISAL=1 00:01:40.095 ++ SPDK_RUN_ASAN=1 00:01:40.095 ++ SPDK_RUN_UBSAN=1 00:01:40.095 ++ SPDK_TEST_XNVME=1 00:01:40.095 ++ SPDK_TEST_NVME_FDP=1 00:01:40.095 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.095 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:40.095 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.095 ++ RUN_NIGHTLY=1 00:01:40.095 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:40.095 + nvme_files=() 00:01:40.095 + declare -A nvme_files 00:01:40.095 + backend_dir=/var/lib/libvirt/images/backends 00:01:40.095 + nvme_files['nvme.img']=5G 00:01:40.095 + nvme_files['nvme-cmb.img']=5G 00:01:40.095 + nvme_files['nvme-multi0.img']=4G 00:01:40.095 + nvme_files['nvme-multi1.img']=4G 00:01:40.095 + nvme_files['nvme-multi2.img']=4G 00:01:40.095 + nvme_files['nvme-openstack.img']=8G 00:01:40.095 + nvme_files['nvme-zns.img']=5G 00:01:40.095 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:40.095 + (( SPDK_TEST_FTL == 1 )) 00:01:40.095 + nvme_files["nvme-ftl.img"]=6G 00:01:40.095 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:40.095 + nvme_files["nvme-fdp.img"]=1G 00:01:40.095 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:40.095 + for nvme in "${!nvme_files[@]}" 00:01:40.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:01:40.095 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.095 + for nvme in "${!nvme_files[@]}" 00:01:40.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:01:40.668 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:40.668 + for nvme in "${!nvme_files[@]}" 00:01:40.668 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:01:40.668 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.668 + for nvme in "${!nvme_files[@]}" 00:01:40.668 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:01:40.928 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:40.928 + for nvme in "${!nvme_files[@]}" 00:01:40.928 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:01:40.928 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.928 + for nvme in "${!nvme_files[@]}" 00:01:40.928 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:01:40.928 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.928 + for nvme in "${!nvme_files[@]}" 00:01:40.928 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:01:40.928 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.928 + for nvme in "${!nvme_files[@]}" 00:01:40.928 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:01:41.190 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:41.190 + for nvme in "${!nvme_files[@]}" 00:01:41.190 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:01:41.190 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.190 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:01:41.190 + echo 'End stage prepare_nvme.sh' 00:01:41.190 End stage prepare_nvme.sh 00:01:41.204 [Pipeline] sh 00:01:41.490 + DISTRO=fedora39 00:01:41.490 + CPUS=10 00:01:41.490 + RAM=12288 00:01:41.490 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:41.490 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:41.490 
00:01:41.490 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:41.490 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:41.490 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:41.490 HELP=0 00:01:41.490 DRY_RUN=0 00:01:41.490 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:01:41.490 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:41.490 NVME_AUTO_CREATE=0 00:01:41.490 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:01:41.490 NVME_CMB=,,,, 00:01:41.490 NVME_PMR=,,,, 00:01:41.490 NVME_ZNS=,,,, 00:01:41.490 NVME_MS=true,,,, 00:01:41.490 NVME_FDP=,,,on, 00:01:41.490 SPDK_VAGRANT_DISTRO=fedora39 00:01:41.490 SPDK_VAGRANT_VMCPU=10 00:01:41.490 SPDK_VAGRANT_VMRAM=12288 00:01:41.490 SPDK_VAGRANT_PROVIDER=libvirt 00:01:41.490 SPDK_VAGRANT_HTTP_PROXY= 00:01:41.490 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:41.490 SPDK_OPENSTACK_NETWORK=0 00:01:41.490 VAGRANT_PACKAGE_BOX=0 00:01:41.490 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:41.490 FORCE_DISTRO=true 00:01:41.490 VAGRANT_BOX_VERSION= 00:01:41.490 EXTRA_VAGRANTFILES= 00:01:41.490 NIC_MODEL=e1000 00:01:41.490 00:01:41.490 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:41.490 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:44.034 Bringing machine 'default' up with 'libvirt' provider... 00:01:44.291 ==> default: Creating image (snapshot of base box volume). 00:01:44.549 ==> default: Creating domain with the following settings... 
00:01:44.549 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727762049_201e1aeadfe11421a91f 00:01:44.549 ==> default: -- Domain type: kvm 00:01:44.549 ==> default: -- Cpus: 10 00:01:44.549 ==> default: -- Feature: acpi 00:01:44.549 ==> default: -- Feature: apic 00:01:44.549 ==> default: -- Feature: pae 00:01:44.549 ==> default: -- Memory: 12288M 00:01:44.549 ==> default: -- Memory Backing: hugepages: 00:01:44.549 ==> default: -- Management MAC: 00:01:44.549 ==> default: -- Loader: 00:01:44.549 ==> default: -- Nvram: 00:01:44.549 ==> default: -- Base box: spdk/fedora39 00:01:44.549 ==> default: -- Storage pool: default 00:01:44.549 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727762049_201e1aeadfe11421a91f.img (20G) 00:01:44.549 ==> default: -- Volume Cache: default 00:01:44.549 ==> default: -- Kernel: 00:01:44.549 ==> default: -- Initrd: 00:01:44.549 ==> default: -- Graphics Type: vnc 00:01:44.549 ==> default: -- Graphics Port: -1 00:01:44.549 ==> default: -- Graphics IP: 127.0.0.1 00:01:44.549 ==> default: -- Graphics Password: Not defined 00:01:44.549 ==> default: -- Video Type: cirrus 00:01:44.549 ==> default: -- Video VRAM: 9216 00:01:44.549 ==> default: -- Sound Type: 00:01:44.549 ==> default: -- Keymap: en-us 00:01:44.549 ==> default: -- TPM Path: 00:01:44.549 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:44.549 ==> default: -- Command line args: 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:44.549 ==> default: -> value=-drive, 00:01:44.549 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:44.549 ==> default: -> value=-drive, 00:01:44.549 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:44.549 ==> default: -> value=-drive, 00:01:44.549 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.549 ==> default: -> value=-drive, 00:01:44.549 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.549 ==> default: -> value=-drive, 00:01:44.549 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:44.549 ==> default: -> value=-drive, 00:01:44.549 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:44.549 ==> default: -> value=-device, 00:01:44.549 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.549 ==> default: Creating shared folders metadata... 00:01:44.549 ==> default: Starting domain. 00:01:46.463 ==> default: Waiting for domain to get an IP address... 00:02:04.645 ==> default: Waiting for SSH to become available... 00:02:04.645 ==> default: Configuring and enabling network interfaces... 00:02:07.947 default: SSH address: 192.168.121.102:22 00:02:07.947 default: SSH username: vagrant 00:02:07.947 default: SSH auth method: private key 00:02:09.861 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:18.043 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:23.335 ==> default: Mounting SSHFS shared folder... 00:02:25.881 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:25.881 ==> default: Checking Mount.. 00:02:26.822 ==> default: Folder Successfully Mounted! 00:02:26.822 00:02:26.822 SUCCESS! 00:02:26.822 00:02:26.822 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:26.822 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:26.822 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:26.822 00:02:26.833 [Pipeline] } 00:02:26.848 [Pipeline] // stage 00:02:26.878 [Pipeline] dir 00:02:26.879 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:26.880 [Pipeline] { 00:02:26.892 [Pipeline] catchError 00:02:26.893 [Pipeline] { 00:02:26.905 [Pipeline] sh 00:02:27.186 + vagrant ssh-config --host vagrant 00:02:27.186 + sed -ne '/^Host/,$p' 00:02:27.186 + tee ssh_conf 00:02:30.492 Host vagrant 00:02:30.492 HostName 192.168.121.102 00:02:30.492 User vagrant 00:02:30.492 Port 22 00:02:30.492 UserKnownHostsFile /dev/null 00:02:30.492 StrictHostKeyChecking no 00:02:30.492 PasswordAuthentication no 00:02:30.492 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:30.492 IdentitiesOnly yes 00:02:30.492 LogLevel FATAL 00:02:30.492 ForwardAgent yes 00:02:30.492 ForwardX11 yes 00:02:30.492 00:02:30.509 [Pipeline] withEnv 00:02:30.511 [Pipeline] { 00:02:30.527 [Pipeline] sh 00:02:30.815 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:30.815 source /etc/os-release 00:02:30.815 [[ -e /image.version ]] && img=$(< /image.version) 00:02:30.815 # Minimal, systemd-like check. 
00:02:30.815 if [[ -e /.dockerenv ]]; then 00:02:30.815 # Clear garbage from the node'\''s name: 00:02:30.815 # agt-er_autotest_547-896 -> autotest_547-896 00:02:30.815 # $HOSTNAME is the actual container id 00:02:30.815 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:30.815 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:30.815 # We can assume this is a mount from a host where container is running, 00:02:30.815 # so fetch its hostname to easily identify the target swarm worker. 00:02:30.815 container="$(< /etc/hostname) ($agent)" 00:02:30.815 else 00:02:30.815 # Fallback 00:02:30.815 container=$agent 00:02:30.815 fi 00:02:30.815 fi 00:02:30.815 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:30.815 ' 00:02:31.092 [Pipeline] } 00:02:31.112 [Pipeline] // withEnv 00:02:31.120 [Pipeline] setCustomBuildProperty 00:02:31.137 [Pipeline] stage 00:02:31.139 [Pipeline] { (Tests) 00:02:31.158 [Pipeline] sh 00:02:31.441 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:31.718 [Pipeline] sh 00:02:32.098 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:32.115 [Pipeline] timeout 00:02:32.115 Timeout set to expire in 50 min 00:02:32.117 [Pipeline] { 00:02:32.134 [Pipeline] sh 00:02:32.418 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:32.707 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:02:32.719 [Pipeline] sh 00:02:33.003 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:33.276 [Pipeline] sh 00:02:33.563 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:33.841 [Pipeline] sh 00:02:34.123 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:34.383 ++ readlink -f spdk_repo 00:02:34.383 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:34.383 + [[ -n /home/vagrant/spdk_repo ]] 00:02:34.383 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:34.383 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:34.383 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:34.383 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:34.383 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:34.383 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:34.383 + cd /home/vagrant/spdk_repo 00:02:34.383 + source /etc/os-release 00:02:34.383 ++ NAME='Fedora Linux' 00:02:34.383 ++ VERSION='39 (Cloud Edition)' 00:02:34.383 ++ ID=fedora 00:02:34.383 ++ VERSION_ID=39 00:02:34.383 ++ VERSION_CODENAME= 00:02:34.383 ++ PLATFORM_ID=platform:f39 00:02:34.383 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:34.383 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:34.383 ++ LOGO=fedora-logo-icon 00:02:34.383 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:34.383 ++ HOME_URL=https://fedoraproject.org/ 00:02:34.383 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:34.383 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:34.383 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:34.383 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:34.383 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:34.383 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:34.383 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:34.383 ++ SUPPORT_END=2024-11-12 00:02:34.383 ++ VARIANT='Cloud Edition' 00:02:34.383 ++ VARIANT_ID=cloud 00:02:34.383 + uname -a 00:02:34.383 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:34.383 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:34.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:34.902 Hugepages 00:02:34.902 node hugesize free / total 00:02:34.902 node0 1048576kB 0 / 0 00:02:34.902 node0 2048kB 0 / 0 00:02:34.902 00:02:34.902 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:34.902 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:34.902 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:34.902 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:35.162 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:35.162 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:35.162 + rm -f /tmp/spdk-ld-path 00:02:35.162 + source autorun-spdk.conf 00:02:35.162 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.162 ++ SPDK_TEST_NVME=1 00:02:35.162 ++ SPDK_TEST_FTL=1 00:02:35.162 ++ SPDK_TEST_ISAL=1 00:02:35.162 ++ SPDK_RUN_ASAN=1 00:02:35.162 ++ SPDK_RUN_UBSAN=1 00:02:35.162 ++ SPDK_TEST_XNVME=1 00:02:35.162 ++ SPDK_TEST_NVME_FDP=1 00:02:35.162 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:35.162 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.162 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.162 ++ RUN_NIGHTLY=1 00:02:35.162 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:35.162 + [[ -n '' ]] 00:02:35.162 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:35.162 + for M in /var/spdk/build-*-manifest.txt 00:02:35.162 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:35.162 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.162 + for M in /var/spdk/build-*-manifest.txt 00:02:35.162 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:35.162 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.162 + for M in /var/spdk/build-*-manifest.txt 00:02:35.162 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:35.162 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.162 ++ uname 00:02:35.162 + [[ Linux == 
\L\i\n\u\x ]] 00:02:35.162 + sudo dmesg -T 00:02:35.162 + sudo dmesg --clear 00:02:35.162 + dmesg_pid=5756 00:02:35.162 + sudo dmesg -Tw 00:02:35.162 + [[ Fedora Linux == FreeBSD ]] 00:02:35.162 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.162 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.162 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:35.162 + [[ -x /usr/src/fio-static/fio ]] 00:02:35.162 + export FIO_BIN=/usr/src/fio-static/fio 00:02:35.162 + FIO_BIN=/usr/src/fio-static/fio 00:02:35.162 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:35.162 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:35.162 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:35.162 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.162 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.162 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:35.162 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.162 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.162 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.162 Test configuration: 00:02:35.162 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.162 SPDK_TEST_NVME=1 00:02:35.162 SPDK_TEST_FTL=1 00:02:35.162 SPDK_TEST_ISAL=1 00:02:35.162 SPDK_RUN_ASAN=1 00:02:35.162 SPDK_RUN_UBSAN=1 00:02:35.162 SPDK_TEST_XNVME=1 00:02:35.162 SPDK_TEST_NVME_FDP=1 00:02:35.162 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:35.162 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.162 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.162 RUN_NIGHTLY=1 05:55:00 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:35.162 05:55:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:35.162 05:55:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:35.162 05:55:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:35.162 05:55:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.162 05:55:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.162 05:55:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.162 05:55:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.162 05:55:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.162 05:55:00 -- paths/export.sh@5 -- $ export PATH 00:02:35.162 05:55:00 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.162 05:55:00 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:35.162 05:55:00 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:35.422 05:55:00 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727762100.XXXXXX 00:02:35.422 05:55:00 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727762100.egBOKr 00:02:35.422 05:55:00 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:35.422 05:55:00 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:35.422 05:55:00 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:35.422 05:55:00 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:35.422 05:55:00 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:35.422 05:55:00 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:35.422 05:55:00 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:35.422 05:55:00 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:35.422 05:55:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.422 05:55:00 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:35.422 05:55:00 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:35.422 05:55:00 -- pm/common@17 -- $ local monitor 00:02:35.422 05:55:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.422 05:55:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.422 05:55:00 -- pm/common@25 -- $ sleep 1 00:02:35.422 05:55:00 -- pm/common@21 -- $ date +%s 00:02:35.422 05:55:00 -- pm/common@21 -- $ date +%s 00:02:35.422 05:55:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727762100 00:02:35.422 05:55:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727762100 00:02:35.422 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727762100_collect-vmstat.pm.log 00:02:35.422 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727762100_collect-cpu-load.pm.log 00:02:36.364 05:55:01 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:36.364 05:55:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:36.364 05:55:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:36.364 05:55:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:36.364 05:55:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:36.364 
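
An aside for anyone reproducing this stage outside Jenkins: the two collectors launched above follow a simple pattern, background monitors writing to timestamped logs under the shared output/power directory. A minimal bash sketch of that pattern follows; the monitor script names and the -d/-l/-p flags match the trace, while OUTPUT_DIR and the trap-based teardown are illustrative assumptions (the real teardown is the stop_monitor_resources trap visible just below).

# Sketch only: mirrors the monitor startup traced above.
OUTPUT_DIR=/home/vagrant/spdk_repo/output/power    # assumption for the sketch
stamp=$(date +%s)                  # e.g. 1727762100, as in the log names above
mkdir -p "$OUTPUT_DIR"
for mon in collect-cpu-load collect-vmstat; do
    # -d output dir, -l log to a file, -p log-name prefix (flags as in the trace)
    "./scripts/perf/pm/$mon" -d "$OUTPUT_DIR" -l -p "monitor.autobuild.sh.$stamp" &
done
trap 'kill $(jobs -p) 2>/dev/null' EXIT   # stop both monitors when the build exits
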
Tue Oct 1 05:55:01 AM UTC 2024 00:02:36.364 05:55:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:36.364 v25.01-pre-17-g09cc66129 00:02:36.364 05:55:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:36.364 05:55:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:36.364 05:55:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:36.364 05:55:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:36.364 05:55:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.364 ************************************ 00:02:36.364 START TEST asan 00:02:36.364 ************************************ 00:02:36.364 using asan 00:02:36.364 05:55:01 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:36.364 00:02:36.364 real 0m0.000s 00:02:36.364 user 0m0.000s 00:02:36.364 sys 0m0.000s 00:02:36.364 05:55:01 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:36.364 ************************************ 00:02:36.364 05:55:01 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.364 END TEST asan 00:02:36.364 ************************************ 00:02:36.364 05:55:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:36.364 05:55:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:36.364 05:55:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:36.364 05:55:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:36.364 05:55:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.364 ************************************ 00:02:36.364 START TEST ubsan 00:02:36.364 ************************************ 00:02:36.364 using ubsan 00:02:36.364 05:55:01 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:36.364 00:02:36.364 real 0m0.000s 00:02:36.364 user 0m0.000s 00:02:36.364 sys 0m0.000s 00:02:36.364 05:55:01 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:36.364 05:55:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.364 ************************************ 00:02:36.364 END TEST ubsan 00:02:36.364 ************************************ 00:02:36.364 05:55:01 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:36.364 05:55:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:36.364 05:55:01 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:36.364 05:55:01 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:36.364 05:55:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:36.364 05:55:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.364 ************************************ 00:02:36.364 START TEST build_native_dpdk 00:02:36.364 ************************************ 00:02:36.364 05:55:01 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:36.364 05:55:01 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:36.364 05:55:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:36.365 caf0f5d395 version: 22.11.4 00:02:36.365 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:36.365 dc9c799c7d vhost: fix missing spinlock unlock 00:02:36.365 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:36.365 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:36.365 05:55:01 build_native_dpdk 
-- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:36.365 patching file config/rte_config.h 00:02:36.365 Hunk #1 succeeded at 60 (offset 1 line). 
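
The xtrace above is dense, so here is a compact standalone sketch of what the cmp_versions helper from scripts/common.sh is doing: split each version string on the characters . - : and compare component by component, treating absent components as 0. cmp_versions_sketch is an illustrative stand-in, not the real function, and it only handles the <, > and = operators seen in this log.

cmp_versions_sketch() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"    # "22.11.4" -> (22 11 4)
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}       # missing components compare as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]                          # every component was equal
}
# Mirrors the trace above: 22 > 21 on the very first component, so '<' fails:
cmp_versions_sketch 22.11.4 '<' 21.11.0 || echo '22.11.4 is not older than 21.11.0'
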
00:02:36.365 05:55:01 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.365 05:55:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:36.626 05:55:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:36.626 patching file lib/pcapng/rte_pcapng.c 00:02:36.626 Hunk #1 succeeded at 110 (offset -18 lines). 
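
Taken together, the comparisons above gate the backports: each patch is applied only when the checked-out DPDK version falls in the range that still needs it (here 22.11.4 is older than 24.07.0, so the rte_pcapng fix goes in). Schematically, reusing the sketch above with a hypothetical patch file name:

# Schematic only; fix-rte_pcapng.patch is a placeholder, the real diff is
# carried inside autobuild_common.sh's patching logic.
if cmp_versions_sketch 22.11.4 '<' 24.07.0; then
    patch -p1 < fix-rte_pcapng.patch
fi
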
00:02:36.626 05:55:01 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.626 05:55:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:36.627 05:55:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:36.627 05:55:01 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:36.627 05:55:02 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:36.627 05:55:02 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:36.627 05:55:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:36.627 05:55:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:40.840 The Meson build system 00:02:40.840 Version: 1.5.0 00:02:40.840 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:40.840 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:40.840 Build type: native build 00:02:40.840 Program cat found: YES 
(/usr/bin/cat) 00:02:40.840 Project name: DPDK 00:02:40.840 Project version: 22.11.4 00:02:40.840 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.840 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:40.840 Host machine cpu family: x86_64 00:02:40.840 Host machine cpu: x86_64 00:02:40.840 Message: ## Building in Developer Mode ## 00:02:40.840 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:40.840 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:40.840 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:40.840 Program objdump found: YES (/usr/bin/objdump) 00:02:40.840 Program python3 found: YES (/usr/bin/python3) 00:02:40.840 Program cat found: YES (/usr/bin/cat) 00:02:40.840 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:40.840 Checking for size of "void *" : 8 00:02:40.840 Checking for size of "void *" : 8 (cached) 00:02:40.840 Library m found: YES 00:02:40.840 Library numa found: YES 00:02:40.840 Has header "numaif.h" : YES 00:02:40.840 Library fdt found: NO 00:02:40.840 Library execinfo found: NO 00:02:40.840 Has header "execinfo.h" : YES 00:02:40.840 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.840 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:40.840 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:40.840 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:40.840 Run-time dependency openssl found: YES 3.1.1 00:02:40.840 Run-time dependency libpcap found: YES 1.10.4 00:02:40.840 Has header "pcap.h" with dependency libpcap: YES 00:02:40.840 Compiler for C supports arguments -Wcast-qual: YES 00:02:40.840 Compiler for C supports arguments -Wdeprecated: YES 00:02:40.840 Compiler for C supports arguments -Wformat: YES 00:02:40.840 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:40.840 Compiler for C supports arguments -Wformat-security: NO 00:02:40.840 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.840 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:40.840 Compiler for C supports arguments -Wnested-externs: YES 00:02:40.840 Compiler for C supports arguments -Wold-style-definition: YES 00:02:40.840 Compiler for C supports arguments -Wpointer-arith: YES 00:02:40.840 Compiler for C supports arguments -Wsign-compare: YES 00:02:40.840 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:40.840 Compiler for C supports arguments -Wundef: YES 00:02:40.840 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.840 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:40.840 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:40.840 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.840 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:40.840 Compiler for C supports arguments -mavx512f: YES 00:02:40.840 Checking if "AVX512 checking" compiles: YES 00:02:40.840 Fetching value of define "__SSE4_2__" : 1 00:02:40.840 Fetching value of define "__AES__" : 1 00:02:40.840 Fetching value of define "__AVX__" : 1 00:02:40.840 Fetching value of define "__AVX2__" : 1 00:02:40.840 Fetching value of define "__AVX512BW__" : 1 00:02:40.840 Fetching value of define "__AVX512CD__" : 1 00:02:40.840 Fetching value of define "__AVX512DQ__" : 1 
00:02:40.840 Fetching value of define "__AVX512F__" : 1 00:02:40.840 Fetching value of define "__AVX512VL__" : 1 00:02:40.840 Fetching value of define "__PCLMUL__" : 1 00:02:40.840 Fetching value of define "__RDRND__" : 1 00:02:40.840 Fetching value of define "__RDSEED__" : 1 00:02:40.840 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:40.840 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:40.840 Message: lib/kvargs: Defining dependency "kvargs" 00:02:40.840 Message: lib/telemetry: Defining dependency "telemetry" 00:02:40.840 Checking for function "getentropy" : YES 00:02:40.840 Message: lib/eal: Defining dependency "eal" 00:02:40.840 Message: lib/ring: Defining dependency "ring" 00:02:40.840 Message: lib/rcu: Defining dependency "rcu" 00:02:40.840 Message: lib/mempool: Defining dependency "mempool" 00:02:40.840 Message: lib/mbuf: Defining dependency "mbuf" 00:02:40.840 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:40.840 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.840 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.840 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:40.840 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.840 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:40.840 Compiler for C supports arguments -mpclmul: YES 00:02:40.840 Compiler for C supports arguments -maes: YES 00:02:40.840 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.840 Compiler for C supports arguments -mavx512bw: YES 00:02:40.840 Compiler for C supports arguments -mavx512dq: YES 00:02:40.840 Compiler for C supports arguments -mavx512vl: YES 00:02:40.840 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:40.840 Compiler for C supports arguments -mavx2: YES 00:02:40.840 Compiler for C supports arguments -mavx: YES 00:02:40.840 Message: lib/net: Defining dependency "net" 00:02:40.840 Message: lib/meter: Defining dependency "meter" 00:02:40.840 Message: lib/ethdev: Defining dependency "ethdev" 00:02:40.840 Message: lib/pci: Defining dependency "pci" 00:02:40.840 Message: lib/cmdline: Defining dependency "cmdline" 00:02:40.840 Message: lib/metrics: Defining dependency "metrics" 00:02:40.840 Message: lib/hash: Defining dependency "hash" 00:02:40.840 Message: lib/timer: Defining dependency "timer" 00:02:40.840 Fetching value of define "__AVX2__" : 1 (cached) 00:02:40.840 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.840 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:40.840 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:40.840 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.840 Message: lib/acl: Defining dependency "acl" 00:02:40.840 Message: lib/bbdev: Defining dependency "bbdev" 00:02:40.840 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:40.840 Run-time dependency libelf found: YES 0.191 00:02:40.840 Message: lib/bpf: Defining dependency "bpf" 00:02:40.840 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:40.840 Message: lib/compressdev: Defining dependency "compressdev" 00:02:40.840 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:40.840 Message: lib/distributor: Defining dependency "distributor" 00:02:40.840 Message: lib/efd: Defining dependency "efd" 00:02:40.840 Message: lib/eventdev: Defining dependency "eventdev" 00:02:40.840 Message: lib/gpudev: Defining dependency "gpudev" 00:02:40.840 Message: lib/gro: Defining dependency "gro" 00:02:40.840 Message: lib/gso: Defining dependency "gso" 
00:02:40.840 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:40.840 Message: lib/jobstats: Defining dependency "jobstats"
00:02:40.840 Message: lib/latencystats: Defining dependency "latencystats"
00:02:40.840 Message: lib/lpm: Defining dependency "lpm"
00:02:40.840 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:40.840 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:40.840 Fetching value of define "__AVX512IFMA__" : 1
00:02:40.840 Message: lib/member: Defining dependency "member"
00:02:40.840 Message: lib/pcapng: Defining dependency "pcapng"
00:02:40.840 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:40.840 Message: lib/power: Defining dependency "power"
00:02:40.840 Message: lib/rawdev: Defining dependency "rawdev"
00:02:40.840 Message: lib/regexdev: Defining dependency "regexdev"
00:02:40.840 Message: lib/dmadev: Defining dependency "dmadev"
00:02:40.840 Message: lib/rib: Defining dependency "rib"
00:02:40.840 Message: lib/reorder: Defining dependency "reorder"
00:02:40.840 Message: lib/sched: Defining dependency "sched"
00:02:40.840 Message: lib/security: Defining dependency "security"
00:02:40.840 Message: lib/stack: Defining dependency "stack"
00:02:40.840 Has header "linux/userfaultfd.h" : YES
00:02:40.840 Message: lib/vhost: Defining dependency "vhost"
00:02:40.840 Message: lib/ipsec: Defining dependency "ipsec"
00:02:40.840 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:40.840 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:40.840 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:40.840 Message: lib/fib: Defining dependency "fib"
00:02:40.840 Message: lib/port: Defining dependency "port"
00:02:40.840 Message: lib/pdump: Defining dependency "pdump"
00:02:40.841 Message: lib/table: Defining dependency "table"
00:02:40.841 Message: lib/pipeline: Defining dependency "pipeline"
00:02:40.841 Message: lib/graph: Defining dependency "graph"
00:02:40.841 Message: lib/node: Defining dependency "node"
00:02:40.841 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:40.841 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:40.841 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:40.841 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:40.841 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:40.841 Compiler for C supports arguments -Wno-unused-value: YES
00:02:40.841 Compiler for C supports arguments -Wno-format: YES
00:02:40.841 Compiler for C supports arguments -Wno-format-security: YES
00:02:40.841 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:40.841 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:40.841 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:40.841 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:42.227 Fetching value of define "__AVX2__" : 1 (cached)
00:02:42.227 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:42.227 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:42.227 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:42.227 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:42.227 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:42.227 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:42.227 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:42.227 Configuring doxy-api.conf using configuration
00:02:42.227 Program sphinx-build found: NO
00:02:42.227 Configuring rte_build_config.h using configuration
00:02:42.227 Message:
00:02:42.227 =================
00:02:42.227 Applications Enabled
00:02:42.227 =================
00:02:42.227
00:02:42.227 apps:
00:02:42.227 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:02:42.228 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:02:42.228 test-security-perf,
00:02:42.228
00:02:42.228 Message:
00:02:42.228 =================
00:02:42.228 Libraries Enabled
00:02:42.228 =================
00:02:42.228
00:02:42.228 libs:
00:02:42.228 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:02:42.228 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:02:42.228 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:02:42.228 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:42.228 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:42.228 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:42.228 table, pipeline, graph, node,
00:02:42.228
00:02:42.228 Message:
00:02:42.228 ===============
00:02:42.228 Drivers Enabled
00:02:42.228 ===============
00:02:42.228
00:02:42.228 common:
00:02:42.228
00:02:42.228 bus:
00:02:42.228 pci, vdev,
00:02:42.228 mempool:
00:02:42.228 ring,
00:02:42.228 dma:
00:02:42.228
00:02:42.228 net:
00:02:42.228 i40e,
00:02:42.228 raw:
00:02:42.228
00:02:42.228 crypto:
00:02:42.228
00:02:42.228 compress:
00:02:42.228
00:02:42.228 regex:
00:02:42.228
00:02:42.228 vdpa:
00:02:42.228
00:02:42.228 event:
00:02:42.228
00:02:42.228 baseband:
00:02:42.228
00:02:42.228 gpu:
00:02:42.228
00:02:42.228
00:02:42.228 Message:
00:02:42.228 =================
00:02:42.228 Content Skipped
00:02:42.228 =================
00:02:42.228
00:02:42.228 apps:
00:02:42.228
00:02:42.228 libs:
00:02:42.228 kni: explicitly disabled via build config (deprecated lib)
00:02:42.228 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:42.228
00:02:42.228 drivers:
00:02:42.228 common/cpt: not in enabled drivers build config
00:02:42.228 common/dpaax: not in enabled drivers build config
00:02:42.228 common/iavf: not in enabled drivers build config
00:02:42.228 common/idpf: not in enabled drivers build config
00:02:42.228 common/mvep: not in enabled drivers build config
00:02:42.228 common/octeontx: not in enabled drivers build config
00:02:42.228 bus/auxiliary: not in enabled drivers build config
00:02:42.228 bus/dpaa: not in enabled drivers build config
00:02:42.228 bus/fslmc: not in enabled drivers build config
00:02:42.228 bus/ifpga: not in enabled drivers build config
00:02:42.228 bus/vmbus: not in enabled drivers build config
00:02:42.228 common/cnxk: not in enabled drivers build config
00:02:42.228 common/mlx5: not in enabled drivers build config
00:02:42.228 common/qat: not in enabled drivers build config
00:02:42.228 common/sfc_efx: not in enabled drivers build config
00:02:42.228 mempool/bucket: not in enabled drivers build config
00:02:42.228 mempool/cnxk: not in enabled drivers build config
00:02:42.228 mempool/dpaa: not in enabled drivers build config
00:02:42.228 mempool/dpaa2: not in enabled drivers build config
00:02:42.228 mempool/octeontx: not in enabled drivers build config
00:02:42.228 mempool/stack: not in enabled drivers build config
00:02:42.228 dma/cnxk: not in enabled drivers build config
00:02:42.228 dma/dpaa: not in enabled drivers build config
00:02:42.228 dma/dpaa2: not in enabled drivers build config
00:02:42.228 dma/hisilicon: not in enabled drivers build config
00:02:42.228 dma/idxd: not in enabled drivers build config
00:02:42.228 dma/ioat: not in enabled drivers build config
00:02:42.228 dma/skeleton: not in enabled drivers build config
00:02:42.228 net/af_packet: not in enabled drivers build config
00:02:42.228 net/af_xdp: not in enabled drivers build config
00:02:42.228 net/ark: not in enabled drivers build config
00:02:42.228 net/atlantic: not in enabled drivers build config
00:02:42.228 net/avp: not in enabled drivers build config
00:02:42.228 net/axgbe: not in enabled drivers build config
00:02:42.228 net/bnx2x: not in enabled drivers build config
00:02:42.228 net/bnxt: not in enabled drivers build config
00:02:42.228 net/bonding: not in enabled drivers build config
00:02:42.228 net/cnxk: not in enabled drivers build config
00:02:42.228 net/cxgbe: not in enabled drivers build config
00:02:42.228 net/dpaa: not in enabled drivers build config
00:02:42.228 net/dpaa2: not in enabled drivers build config
00:02:42.228 net/e1000: not in enabled drivers build config
00:02:42.228 net/ena: not in enabled drivers build config
00:02:42.228 net/enetc: not in enabled drivers build config
00:02:42.228 net/enetfec: not in enabled drivers build config
00:02:42.228 net/enic: not in enabled drivers build config
00:02:42.228 net/failsafe: not in enabled drivers build config
00:02:42.228 net/fm10k: not in enabled drivers build config
00:02:42.228 net/gve: not in enabled drivers build config
00:02:42.228 net/hinic: not in enabled drivers build config
00:02:42.228 net/hns3: not in enabled drivers build config
00:02:42.228 net/iavf: not in enabled drivers build config
00:02:42.228 net/ice: not in enabled drivers build config
00:02:42.228 net/idpf: not in enabled drivers build config
00:02:42.228 net/igc: not in enabled drivers build config
00:02:42.228 net/ionic: not in enabled drivers build config
00:02:42.228 net/ipn3ke: not in enabled drivers build config
00:02:42.228 net/ixgbe: not in enabled drivers build config
00:02:42.228 net/kni: not in enabled drivers build config
00:02:42.228 net/liquidio: not in enabled drivers build config
00:02:42.228 net/mana: not in enabled drivers build config
00:02:42.228 net/memif: not in enabled drivers build config
00:02:42.228 net/mlx4: not in enabled drivers build config
00:02:42.228 net/mlx5: not in enabled drivers build config
00:02:42.228 net/mvneta: not in enabled drivers build config
00:02:42.228 net/mvpp2: not in enabled drivers build config
00:02:42.228 net/netvsc: not in enabled drivers build config
00:02:42.228 net/nfb: not in enabled drivers build config
00:02:42.228 net/nfp: not in enabled drivers build config
00:02:42.228 net/ngbe: not in enabled drivers build config
00:02:42.228 net/null: not in enabled drivers build config
00:02:42.228 net/octeontx: not in enabled drivers build config
00:02:42.228 net/octeon_ep: not in enabled drivers build config
00:02:42.228 net/pcap: not in enabled drivers build config
00:02:42.228 net/pfe: not in enabled drivers build config
00:02:42.228 net/qede: not in enabled drivers build config
00:02:42.228 net/ring: not in enabled drivers build config
00:02:42.228 net/sfc: not in enabled drivers build config
00:02:42.228 net/softnic: not in enabled drivers build config
00:02:42.228 net/tap: not in enabled drivers build config
00:02:42.228 net/thunderx: not in enabled drivers build config
00:02:42.228 net/txgbe: not in enabled drivers build config
00:02:42.228 net/vdev_netvsc: not in enabled drivers build config
00:02:42.228 net/vhost: not in enabled drivers build config
00:02:42.228 net/virtio: not in enabled drivers build config
00:02:42.228 net/vmxnet3: not in enabled drivers build config
00:02:42.228 raw/cnxk_bphy: not in enabled drivers build config
00:02:42.228 raw/cnxk_gpio: not in enabled drivers build config
00:02:42.228 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:42.228 raw/ifpga: not in enabled drivers build config
00:02:42.228 raw/ntb: not in enabled drivers build config
00:02:42.228 raw/skeleton: not in enabled drivers build config
00:02:42.228 crypto/armv8: not in enabled drivers build config
00:02:42.228 crypto/bcmfs: not in enabled drivers build config
00:02:42.228 crypto/caam_jr: not in enabled drivers build config
00:02:42.228 crypto/ccp: not in enabled drivers build config
00:02:42.228 crypto/cnxk: not in enabled drivers build config
00:02:42.228 crypto/dpaa_sec: not in enabled drivers build config
00:02:42.228 crypto/dpaa2_sec: not in enabled drivers build config
00:02:42.228 crypto/ipsec_mb: not in enabled drivers build config
00:02:42.228 crypto/mlx5: not in enabled drivers build config
00:02:42.228 crypto/mvsam: not in enabled drivers build config
00:02:42.228 crypto/nitrox: not in enabled drivers build config
00:02:42.228 crypto/null: not in enabled drivers build config
00:02:42.228 crypto/octeontx: not in enabled drivers build config
00:02:42.228 crypto/openssl: not in enabled drivers build config
00:02:42.228 crypto/scheduler: not in enabled drivers build config
00:02:42.228 crypto/uadk: not in enabled drivers build config
00:02:42.229 crypto/virtio: not in enabled drivers build config
00:02:42.229 compress/isal: not in enabled drivers build config
00:02:42.229 compress/mlx5: not in enabled drivers build config
00:02:42.229 compress/octeontx: not in enabled drivers build config
00:02:42.229 compress/zlib: not in enabled drivers build config
00:02:42.229 regex/mlx5: not in enabled drivers build config
00:02:42.229 regex/cn9k: not in enabled drivers build config
00:02:42.229 vdpa/ifc: not in enabled drivers build config
00:02:42.229 vdpa/mlx5: not in enabled drivers build config
00:02:42.229 vdpa/sfc: not in enabled drivers build config
00:02:42.229 event/cnxk: not in enabled drivers build config
00:02:42.229 event/dlb2: not in enabled drivers build config
00:02:42.229 event/dpaa: not in enabled drivers build config
00:02:42.229 event/dpaa2: not in enabled drivers build config
00:02:42.229 event/dsw: not in enabled drivers build config
00:02:42.229 event/opdl: not in enabled drivers build config
00:02:42.229 event/skeleton: not in enabled drivers build config
00:02:42.229 event/sw: not in enabled drivers build config
00:02:42.229 event/octeontx: not in enabled drivers build config
00:02:42.229 baseband/acc: not in enabled drivers build config
00:02:42.229 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:42.229 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:42.229 baseband/la12xx: not in enabled drivers build config
00:02:42.229 baseband/null: not in enabled drivers build config
00:02:42.229 baseband/turbo_sw: not in enabled drivers build config
00:02:42.229 gpu/cuda: not in enabled drivers build config
00:02:42.229
00:02:42.229
00:02:42.229 Build targets in project: 309
00:02:42.229
00:02:42.229 DPDK 22.11.4
00:02:42.229
00:02:42.229 User defined options
00:02:42.229 libdir : lib
00:02:42.229 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:42.229 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:42.229 c_link_args :
00:02:42.229 enable_docs : false
00:02:42.229 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:42.229 enable_kmods : false
00:02:42.229 machine : native
00:02:42.229 tests : false
00:02:42.229
00:02:42.229 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:42.229 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:42.491 05:55:07 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:42.491 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:42.491 [1/738] Generating lib/rte_kvargs_def with a custom command
00:02:42.491 [2/738] Generating lib/rte_kvargs_mingw with a custom command
00:02:42.491 [3/738] Generating lib/rte_telemetry_mingw with a custom command
00:02:42.491 [4/738] Generating lib/rte_telemetry_def with a custom command
00:02:42.491 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:42.491 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:42.491 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:42.491 [8/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:42.752 [9/738] Linking static target lib/librte_kvargs.a
00:02:42.752 [10/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:42.752 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:42.752 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:42.752 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:42.752 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:42.752 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:42.752 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:42.752 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:42.752 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:42.752 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:42.752 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.752 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:42.752 [22/738] Linking target lib/librte_kvargs.so.23.0
00:02:42.752 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:42.752 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:43.015 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:43.015 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:43.015 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:43.015 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:43.015 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:43.015 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:43.015 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:43.015 [32/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:43.015 [33/738] Linking static target lib/librte_telemetry.a
00:02:43.015 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:43.015 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:43.015 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:43.015 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:43.276 [38/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:43.276 [39/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:43.276 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:43.276 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:43.276 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:43.276 [43/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.276 [44/738] Linking target lib/librte_telemetry.so.23.0
00:02:43.276 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:43.276 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:43.276 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:43.538 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:43.538 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:43.538 [50/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:43.538 [51/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:43.538 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:43.538 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:43.538 [54/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:43.538 [55/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:43.538 [56/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:43.538 [57/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:43.538 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:43.538 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:43.538 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:43.538 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:43.538 [62/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:43.538 [63/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:43.538 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:43.538 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:43.800 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:43.800 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:43.800 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:43.800 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:43.800 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:43.800 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:43.800 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:43.800 [73/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:43.800 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:43.800 [75/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:43.800 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:43.800 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:43.800 [78/738] Generating lib/rte_eal_def with a custom command
00:02:43.800 [79/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:43.800 [80/738] Generating lib/rte_eal_mingw with a custom command
00:02:43.800 [81/738] Generating lib/rte_ring_def with a custom command
00:02:43.800 [82/738] Generating lib/rte_ring_mingw with a custom command
00:02:43.800 [83/738] Generating lib/rte_rcu_def with a custom command
00:02:43.800 [84/738] Generating lib/rte_rcu_mingw with a custom command
00:02:44.061 [85/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:44.061 [86/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:44.061 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:44.061 [88/738] Linking static target lib/librte_ring.a
00:02:44.061 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:44.061 [90/738] Generating lib/rte_mempool_def with a custom command
00:02:44.061 [91/738] Generating lib/rte_mempool_mingw with a custom command
00:02:44.061 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:44.061 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:44.322 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.322 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:44.322 [96/738] Generating lib/rte_mbuf_def with a custom command
00:02:44.322 [97/738] Generating lib/rte_mbuf_mingw with a custom command
00:02:44.322 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:44.322 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:44.322 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:44.322 [101/738] Linking static target lib/librte_eal.a
00:02:44.583 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:44.583 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:44.583 [104/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:44.583 [105/738] Linking static target lib/librte_rcu.a
00:02:44.583 [106/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:44.583 [107/738] Linking static target lib/librte_mempool.a
00:02:44.845 [108/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:44.845 [109/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:44.845 [110/738] Generating lib/rte_net_def with a custom command
00:02:44.845 [111/738] Generating lib/rte_net_mingw with a custom command
00:02:44.845 [112/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:44.845 [113/738] Generating lib/rte_meter_def with a custom command
00:02:44.845 [114/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:44.845 [115/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:44.845 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:44.845 [117/738] Generating lib/rte_meter_mingw with a custom command
00:02:44.845 [118/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.845 [119/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:45.106 [120/738] Linking static target lib/librte_meter.a
00:02:45.106 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.106 [122/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:45.367 [123/738] Linking static target lib/librte_net.a
00:02:45.367 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:45.367 [125/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:45.367 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:45.367 [127/738] Linking static target lib/librte_mbuf.a
00:02:45.367 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:45.367 [129/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:45.367 [130/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.367 [131/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.367 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:45.629 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:45.890 [134/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.890 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:45.890 [136/738] Generating lib/rte_ethdev_def with a custom command
00:02:45.890 [137/738] Generating lib/rte_ethdev_mingw with a custom command
00:02:45.890 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:45.890 [139/738] Generating lib/rte_pci_def with a custom command
00:02:45.890 [140/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:45.890 [141/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:45.890 [142/738] Generating lib/rte_pci_mingw with a custom command
00:02:45.890 [143/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:45.890 [144/738] Linking static target lib/librte_pci.a
00:02:45.890 [145/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:46.151 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:46.151 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:46.152 [148/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:46.152 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.152 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:46.152 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:46.152 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:46.152 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:46.152 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:46.152 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:46.152 [156/738] Generating lib/rte_cmdline_def with a custom command
00:02:46.152 [157/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:46.152 [158/738] Generating lib/rte_cmdline_mingw with a custom command
00:02:46.152 [159/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:46.413 [160/738] Generating lib/rte_metrics_def with a custom command
00:02:46.413 [161/738] Generating lib/rte_metrics_mingw with a custom command
00:02:46.413 [162/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:46.413 [163/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:46.413 [164/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:46.413 [165/738] Generating lib/rte_hash_def with a custom command
00:02:46.413 [166/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:46.413 [167/738] Generating lib/rte_hash_mingw with a custom command
00:02:46.413 [168/738] Linking static target lib/librte_cmdline.a
00:02:46.413 [169/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:46.413 [170/738] Generating lib/rte_timer_def with a custom command
00:02:46.413 [171/738] Generating lib/rte_timer_mingw with a custom command
00:02:46.674 [172/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:46.674 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:46.674 [174/738] Linking static target lib/librte_metrics.a
00:02:46.674 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:46.674 [176/738] Linking static target lib/librte_timer.a
00:02:46.933 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.933 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:46.933 [179/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:46.933 [180/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.193 [181/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:47.193 [182/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.193 [183/738] Generating lib/rte_acl_def with a custom command
00:02:47.193 [184/738] Generating lib/rte_acl_mingw with a custom command
00:02:47.193 [185/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:47.193 [186/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:47.193 [187/738] Generating lib/rte_bbdev_def with a custom command
00:02:47.454 [188/738] Generating lib/rte_bbdev_mingw with a custom command
00:02:47.454 [189/738] Generating lib/rte_bitratestats_def with a custom command
00:02:47.454 [190/738] Generating lib/rte_bitratestats_mingw with a custom command
00:02:47.454 [191/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:47.454 [192/738] Linking static target lib/librte_ethdev.a
00:02:47.715 [193/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:47.715 [194/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:47.715 [195/738] Linking static target lib/librte_bbdev.a
00:02:47.715 [196/738] Linking static target lib/librte_bitratestats.a
00:02:47.715 [197/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:47.715 [198/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:47.715 [199/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.976 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:48.236 [201/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.236 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:48.236 [203/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:48.236 [204/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:48.497 [205/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:48.497 [206/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:48.497 [207/738] Linking static target lib/librte_hash.a
00:02:48.757 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:48.757 [209/738] Generating lib/rte_bpf_def with a custom command
00:02:48.757 [210/738] Generating lib/rte_bpf_mingw with a custom command
00:02:48.757 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:48.757 [212/738] Generating lib/rte_cfgfile_def with a custom command
00:02:48.757 [213/738] Generating lib/rte_cfgfile_mingw with a custom command
00:02:48.757 [214/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:49.018 [215/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:49.018 [216/738] Linking static target lib/librte_cfgfile.a
00:02:49.018 [217/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:49.018 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:49.018 [219/738] Generating lib/rte_compressdev_def with a custom command
00:02:49.018 [220/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:49.018 [221/738] Linking static target lib/librte_bpf.a
00:02:49.018 [222/738] Generating lib/rte_compressdev_mingw with a custom command
00:02:49.018 [223/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:49.018 [224/738] Linking static target lib/librte_acl.a
00:02:49.018 [225/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.278 [226/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.278 [227/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:49.278 [228/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:49.278 [229/738] Generating lib/rte_cryptodev_def with a custom command
00:02:49.278 [230/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.278 [231/738] Generating lib/rte_cryptodev_mingw with a custom command
00:02:49.278 [232/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.278 [233/738] Generating lib/rte_distributor_def with a custom command
00:02:49.278 [234/738] Generating lib/rte_distributor_mingw with a custom command
00:02:49.278 [235/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:49.278 [236/738] Linking static target lib/librte_compressdev.a
00:02:49.538 [237/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:49.538 [238/738] Generating lib/rte_efd_def with a custom command
00:02:49.538 [239/738] Generating lib/rte_efd_mingw with a custom command
00:02:49.538 [240/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:49.538 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:49.538 [242/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:49.799 [243/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.799 [244/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:49.799 [245/738] Linking target lib/librte_eal.so.23.0
00:02:49.799 [246/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:49.799 [247/738] Linking static target lib/librte_distributor.a
00:02:49.799 [248/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:49.799 [249/738] Linking target lib/librte_ring.so.23.0
00:02:50.059 [250/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:50.059 [251/738] Linking target lib/librte_meter.so.23.0
00:02:50.059 [252/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.059 [253/738] Linking target lib/librte_pci.so.23.0
00:02:50.059 [254/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.059 [255/738] Linking target lib/librte_timer.so.23.0
00:02:50.059 [256/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:50.059 [257/738] Linking target lib/librte_rcu.so.23.0
00:02:50.059 [258/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:50.059 [259/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:50.059 [260/738] Linking target lib/librte_mempool.so.23.0
00:02:50.059 [261/738] Linking target lib/librte_acl.so.23.0
00:02:50.059 [262/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:50.059 [263/738] Linking target lib/librte_cfgfile.so.23.0
00:02:50.059 [264/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:50.059 [265/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:50.319 [266/738] Linking target lib/librte_mbuf.so.23.0
00:02:50.319 [267/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:50.319 [268/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:50.319 [269/738] Linking static target lib/librte_efd.a
00:02:50.319 [270/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:50.319 [271/738] Linking target lib/librte_net.so.23.0
00:02:50.319 [272/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:50.319 [273/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:50.319 [274/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.319 [275/738] Linking target lib/librte_hash.so.23.0
00:02:50.319 [276/738] Linking target lib/librte_cmdline.so.23.0
00:02:50.580 [277/738] Linking target lib/librte_bbdev.so.23.0
00:02:50.580 [278/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:50.580 [279/738] Linking target lib/librte_compressdev.so.23.0
00:02:50.580 [280/738] Linking target lib/librte_distributor.so.23.0
00:02:50.580 [281/738] Generating lib/rte_eventdev_def with a custom command
00:02:50.580 [282/738] Generating lib/rte_eventdev_mingw with a custom command
00:02:50.580 [283/738] Generating lib/rte_gpudev_mingw with a custom command
00:02:50.580 [284/738] Generating lib/rte_gpudev_def with a custom command
00:02:50.580 [285/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:50.580 [286/738] Linking target lib/librte_efd.so.23.0
00:02:50.580 [287/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:50.841 [288/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.841 [289/738] Linking target lib/librte_ethdev.so.23.0
00:02:50.841 [290/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:50.841 [291/738] Linking target lib/librte_metrics.so.23.0
00:02:50.841 [292/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:50.841 [293/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:50.841 [294/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:50.841 [295/738] Linking static target lib/librte_gpudev.a
00:02:50.841 [296/738] Linking target lib/librte_bpf.so.23.0
00:02:51.101 [297/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:51.101 [298/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:51.101 [299/738] Linking target lib/librte_bitratestats.so.23.0
00:02:51.101 [300/738] Generating lib/rte_gro_def with a custom command
00:02:51.101 [301/738] Generating lib/rte_gro_mingw with a custom command
00:02:51.101 [302/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:51.101 [303/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:51.101 [304/738] Linking static target lib/librte_cryptodev.a
00:02:51.101 [305/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:51.361 [306/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:51.361 [307/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:51.361 [308/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:51.361 [309/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:51.361 [310/738] Generating lib/rte_gso_def with a custom command
00:02:51.361 [311/738] Generating lib/rte_gso_mingw with a custom command
00:02:51.361 [312/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:51.361 [313/738] Linking static target lib/librte_gro.a
00:02:51.361 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:51.361 [315/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:51.361 [316/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:51.622 [317/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.622 [318/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.622 [319/738] Linking target lib/librte_gpudev.so.23.0
00:02:51.622 [320/738] Linking target lib/librte_gro.so.23.0
00:02:51.622 [321/738] Generating lib/rte_ip_frag_def with a custom command
00:02:51.622 [322/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:51.622 [323/738] Linking static target lib/librte_eventdev.a
00:02:51.622 [324/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:51.622 [325/738] Generating lib/rte_ip_frag_mingw with a custom command
00:02:51.622 [326/738] Linking static target lib/librte_gso.a
00:02:51.622 [327/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:51.622 [328/738] Linking static target lib/librte_jobstats.a
00:02:51.622 [329/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.622 [330/738] Generating lib/rte_jobstats_def with a custom command
00:02:51.882 [331/738] Linking target lib/librte_gso.so.23.0
00:02:51.882 [332/738] Generating lib/rte_jobstats_mingw with a custom command
00:02:51.882 [333/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:51.882 [334/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:51.882 [335/738] Generating lib/rte_latencystats_mingw with a custom command
00:02:51.882 [336/738] Generating lib/rte_latencystats_def with a custom command
00:02:51.882 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:51.882 [338/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:51.882 [339/738] Generating lib/rte_lpm_def with a custom command
00:02:51.882 [340/738] Generating lib/rte_lpm_mingw with a custom command
00:02:51.882 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:51.882 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:51.882 [343/738] Linking static target lib/librte_ip_frag.a
00:02:51.882 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.143 [345/738] Linking target lib/librte_jobstats.so.23.0
00:02:52.143 [346/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.143 [347/738] Linking target lib/librte_ip_frag.so.23.0
00:02:52.143 [348/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:52.143 [349/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:52.143 [350/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:52.143 [351/738] Linking static target lib/librte_latencystats.a
00:02:52.143 [352/738] Generating lib/rte_member_def with a custom command
00:02:52.143 [353/738] Generating lib/rte_member_mingw with a custom command
00:02:52.404 [354/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:52.404 [355/738] Generating lib/rte_pcapng_def with a custom command
00:02:52.404 [356/738] Generating lib/rte_pcapng_mingw with a custom command
00:02:52.404 [357/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.404 [358/738] Linking target lib/librte_latencystats.so.23.0
00:02:52.404 [359/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:52.404 [360/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:52.664 [361/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:52.664 [362/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.664 [363/738] Linking target lib/librte_cryptodev.so.23.0
00:02:52.664 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:52.664 [365/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:02:52.664 [366/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:52.664 [367/738] Linking static target lib/librte_lpm.a
00:02:52.664 [368/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:52.664 [369/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:52.664 [370/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:52.664 [371/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:52.664 [372/738] Linking static target lib/librte_pcapng.a
00:02:52.933 [373/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:52.933 [374/738] Generating lib/rte_power_def with a custom command
00:02:52.933 [375/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:52.933 [376/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:52.933 [377/738] Generating lib/rte_power_mingw with a custom command
00:02:52.933 [378/738] Generating lib/rte_rawdev_def with a custom command
00:02:52.933 [379/738] Generating lib/rte_rawdev_mingw with a custom command
00:02:52.933 [380/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.933 [381/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.933 [382/738] Generating lib/rte_regexdev_def with a custom command
00:02:52.933 [383/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.933 [384/738] Linking target lib/librte_eventdev.so.23.0
00:02:52.933 [385/738] Generating lib/rte_regexdev_mingw with a custom command
00:02:52.933 [386/738] Linking target lib/librte_lpm.so.23.0
00:02:52.933 [387/738] Linking target lib/librte_pcapng.so.23.0
00:02:53.195 [388/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:53.195 [389/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:53.195 [390/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:53.195 [391/738] Generating lib/rte_dmadev_def with a custom command
00:02:53.195 [392/738] Generating lib/rte_dmadev_mingw with a custom command
00:02:53.195 [393/738] Generating lib/rte_rib_def with a custom command
00:02:53.195 [394/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:53.195 [395/738] Generating lib/rte_rib_mingw with a custom command
00:02:53.195 [396/738] Generating lib/rte_reorder_def with a custom command
00:02:53.195 [397/738] Generating lib/rte_reorder_mingw with a custom command
00:02:53.195 [398/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:53.195 [399/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:53.195 [400/738] Linking static target lib/librte_rawdev.a
00:02:53.195 [401/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:53.195 [402/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:53.195 [403/738] Linking static target lib/librte_power.a
00:02:53.195 [404/738] Linking static target lib/librte_dmadev.a
00:02:53.195 [405/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:53.195 [406/738] Linking static target lib/librte_regexdev.a
00:02:53.455 [407/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:53.455 [408/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:53.455 [409/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:53.455 [410/738] Generating lib/rte_sched_def with a custom command
00:02:53.455 [411/738] Generating lib/rte_sched_mingw with a custom command
00:02:53.455 [412/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:53.455 [413/738] Linking static target lib/librte_member.a
00:02:53.455 [414/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.715 [415/738] Linking target lib/librte_rawdev.so.23.0
00:02:53.715 [416/738] Generating lib/rte_security_def with a custom command
00:02:53.715 [417/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.715 [418/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:53.715 [419/738] Generating lib/rte_security_mingw with a custom command
00:02:53.715 [420/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:53.715 [421/738] Linking target lib/librte_dmadev.so.23.0
00:02:53.715 [422/738] Linking static target lib/librte_reorder.a
00:02:53.715 [423/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:53.715 [424/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:53.715 [425/738] Generating lib/rte_stack_def with a custom command
00:02:53.715 [426/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:53.715 [427/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.715 [428/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:53.715 [429/738] Generating lib/rte_stack_mingw with a custom command
00:02:53.715 [430/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.715 [431/738] Linking static target lib/librte_stack.a
00:02:53.715 [432/738] Linking target lib/librte_regexdev.so.23.0
00:02:53.715 [433/738] Linking target lib/librte_member.so.23.0
00:02:53.976 [434/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.976 [435/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:53.976 [436/738] Linking target lib/librte_reorder.so.23.0
00:02:53.976 [437/738] Linking static target lib/librte_rib.a
00:02:53.976 [438/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:53.976 [439/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.976 [440/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.976 [441/738] Linking target lib/librte_power.so.23.0
00:02:53.976 [442/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:53.976 [443/738] Linking target lib/librte_stack.so.23.0
00:02:53.976 [444/738] Linking static target lib/librte_security.a
00:02:54.237 [445/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:54.237 [446/738] Generating lib/rte_vhost_def with a custom command
00:02:54.237 [447/738] Generating lib/rte_vhost_mingw with a custom command
00:02:54.237 [448/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:54.237 [449/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.237 [450/738] Linking target lib/librte_rib.so.23.0
00:02:54.237 [451/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.499 [452/738] Linking target lib/librte_security.so.23.0
00:02:54.499 [453/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:54.499 [454/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:54.499 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:54.760 [456/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:54.760 [457/738] Linking static target lib/librte_sched.a
00:02:54.760 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:54.760 [459/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:54.760 [460/738] Generating lib/rte_ipsec_def with a custom command
00:02:54.760 [461/738] Generating lib/rte_ipsec_mingw with a custom command
00:02:55.021 [462/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:55.021 [463/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:55.021 [464/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:55.021 [465/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.021 [466/738] Linking target lib/librte_sched.so.23.0
00:02:55.283 [467/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:55.283 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:55.283 [469/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:55.283 [470/738] Generating lib/rte_fib_def with a custom command
00:02:55.283 [471/738] Generating lib/rte_fib_mingw with a custom command
00:02:55.283 [472/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:55.544 [473/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:55.544 [474/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:55.544 [475/738] Linking static target lib/librte_ipsec.a
00:02:55.544 [476/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:55.544 [477/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:55.544 [478/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:55.805 [479/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:55.805 [480/738] Linking static target lib/librte_fib.a
00:02:55.805 [481/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.805 [482/738] Linking target lib/librte_ipsec.so.23.0
00:02:55.805 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:55.805 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:55.805 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:55.805 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:56.066 [487/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.066 [488/738] Linking target lib/librte_fib.so.23.0
00:02:56.326 [489/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:56.326 [490/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:56.326 [491/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:56.326 [492/738] Generating lib/rte_port_def with a custom command
00:02:56.326 [493/738] Generating lib/rte_port_mingw with a custom command
00:02:56.326 [494/738] Generating lib/rte_pdump_def with a custom command
00:02:56.326 [495/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:56.326 [496/738] Generating lib/rte_pdump_mingw with a custom command
00:02:56.586 [497/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:56.586 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:56.586 [499/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:56.586 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:56.847 [501/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:56.847 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:56.847 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:57.134 [504/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:57.134 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:57.134 [506/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:57.134 [507/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:57.134 [508/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:57.134 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:57.134 [510/738] Linking static target lib/librte_port.a
00:02:57.134 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:57.134 [512/738] Linking static target lib/librte_pdump.a
00:02:57.412 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.412 [514/738] Linking target lib/librte_pdump.so.23.0
00:02:57.412 [515/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:57.412 [516/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.673 [517/738] Linking target lib/librte_port.so.23.0
00:02:57.673 [518/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:57.673 [519/738] Generating lib/rte_table_def with a custom command
00:02:57.673 [520/738] Generating lib/rte_table_mingw with a custom command
00:02:57.673 [521/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:57.673 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:57.673 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:57.673 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:57.673 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:57.673 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:57.933 [527/738] Generating lib/rte_pipeline_def with a custom command
00:02:57.933 [528/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:57.933 [529/738] Linking static target lib/librte_table.a
00:02:57.934 [530/738] Generating lib/rte_pipeline_mingw with a custom command
00:02:58.195 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:58.195 [532/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:58.195 [533/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:58.195 [534/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.195 [535/738] Linking target lib/librte_table.so.23.0
00:02:58.456 [536/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:58.456 [537/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:58.456 [538/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:58.456 [539/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:58.456 [540/738] Generating lib/rte_graph_def with a custom command
00:02:58.456 [541/738] Generating lib/rte_graph_mingw with a custom command
00:02:58.456 [542/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:58.716 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:58.716 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:58.716 [545/738] Linking static target lib/librte_graph.a
00:02:58.716 [546/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:58.716 [547/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:58.977 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:58.977 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:58.977 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:58.977 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:58.977 [552/738] Generating lib/rte_node_def with a custom command
00:02:58.977 [553/738] Generating lib/rte_node_mingw with a custom command
00:02:58.977 [554/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:59.239 [555/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:59.239 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:59.239 [557/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:59.239 [558/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.239 [559/738] Linking target lib/librte_graph.so.23.0
00:02:59.239 [560/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:59.239 [561/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:59.239 [562/738] Generating drivers/rte_bus_pci_def with a custom command
00:02:59.239 [563/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:59.239 [564/738] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:59.500 [565/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:59.500 [566/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:59.500 [567/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:59.500 [568/738] Generating drivers/rte_bus_vdev_def with a custom command
00:02:59.500 [569/738] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:59.500 [570/738] Generating drivers/rte_mempool_ring_def with a custom command
00:02:59.500 [571/738] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:59.500 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:59.500 [573/738] Linking static target lib/librte_node.a
00:02:59.500 [574/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:59.500 [575/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:59.500 [576/738] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:59.500 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:59.500 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:59.761 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.761 [580/738] Linking target lib/librte_node.so.23.0
00:02:59.761 [581/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:59.761 [582/738] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:59.761 [583/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:59.761 [584/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:59.761 [585/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:59.761 [586/738] Linking static target drivers/librte_bus_pci.a
00:02:59.761 [587/738] Linking static target drivers/librte_bus_vdev.a
00:03:00.022 [588/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.022 [589/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:00.022 [590/738] Linking target drivers/librte_bus_vdev.so.23.0
00:03:00.022 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:00.022 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:00.022 [593/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.022 [594/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:00.022 [595/738] Linking target drivers/librte_bus_pci.so.23.0
00:03:00.022 [596/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:03:00.022 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:03:00.283 [598/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:00.283 [599/738] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:00.283 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:00.283 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:00.283 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.283 [603/738] Linking static target drivers/librte_mempool_ring.a
00:03:00.283 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.283 [605/738] Linking target drivers/librte_mempool_ring.so.23.0
00:03:00.545 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:00.545 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:00.806 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:00.806 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:01.066 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:01.328 [611/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:01.328 [612/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:01.328 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:01.328 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:01.587 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:01.587 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:01.587 [617/738] Generating drivers/rte_net_i40e_def with a custom command
00:03:01.587 [618/738] Generating
drivers/rte_net_i40e_mingw with a custom command 00:03:01.845 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:02.105 [620/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:02.105 [621/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:02.363 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:02.622 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:02.622 [624/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:02.622 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:02.622 [626/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:02.622 [627/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:02.622 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:02.622 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:02.881 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:02.881 [631/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:03.182 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:03.182 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:03.182 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:03.463 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:03.463 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:03.463 [637/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:03.463 [638/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:03.463 [639/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:03.463 [640/738] Linking static target drivers/librte_net_i40e.a 00:03:03.720 [641/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:03.720 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:03.720 [643/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:03.720 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:03.720 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:03.978 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:03.978 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:03.978 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:03.978 [649/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.978 [650/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:04.236 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:04.236 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:04.236 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:04.236 [654/738] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:04.236 [655/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:04.236 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:04.493 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:04.494 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:04.494 [659/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:04.494 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:04.752 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:04.752 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:05.010 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:05.010 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:05.269 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:05.269 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:05.269 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:05.527 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:05.527 [669/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:05.527 [670/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:05.527 [671/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:05.785 [672/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:05.785 [673/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:05.785 [674/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:05.785 [675/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:05.785 [676/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:06.044 [677/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:06.044 [678/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:06.044 [679/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.044 [680/738] Linking static target lib/librte_vhost.a 00:03:06.304 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:06.304 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:06.304 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:06.304 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:06.304 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:06.304 [686/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:06.562 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:06.562 [688/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:06.562 [689/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:06.562 [690/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:06.821 
[691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:06.821 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:07.079 [693/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.079 [694/738] Linking target lib/librte_vhost.so.23.0 00:03:07.079 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:07.079 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:07.079 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:07.337 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:07.338 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:07.595 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:07.595 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:07.595 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:07.853 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:07.853 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:07.853 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:08.112 [706/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:08.112 [707/738] Linking static target lib/librte_pipeline.a 00:03:08.112 [708/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:08.112 [709/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:08.370 [710/738] Linking target app/dpdk-dumpcap 00:03:08.370 [711/738] Linking target app/dpdk-pdump 00:03:08.370 [712/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:08.370 [713/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:08.370 [714/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:08.629 [715/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:08.629 [716/738] Linking target app/dpdk-proc-info 00:03:08.629 [717/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:08.629 [718/738] Linking target app/dpdk-test-bbdev 00:03:08.629 [719/738] Linking target app/dpdk-test-cmdline 00:03:08.629 [720/738] Linking target app/dpdk-test-compress-perf 00:03:08.629 [721/738] Linking target app/dpdk-test-acl 00:03:08.629 [722/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:08.887 [723/738] Linking target app/dpdk-test-crypto-perf 00:03:08.887 [724/738] Linking target app/dpdk-test-eventdev 00:03:08.887 [725/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:08.887 [726/738] Linking target app/dpdk-test-fib 00:03:08.887 [727/738] Linking target app/dpdk-test-flow-perf 00:03:08.887 [728/738] Linking target app/dpdk-test-gpudev 00:03:08.887 [729/738] Linking target app/dpdk-test-regex 00:03:09.146 [730/738] Linking target app/dpdk-test-pipeline 00:03:09.146 [731/738] Linking target app/dpdk-testpmd 00:03:09.146 [732/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:09.403 [733/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:09.403 [734/738] Linking target app/dpdk-test-sad 00:03:09.660 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:09.916 [736/738] Linking target app/dpdk-test-security-perf 00:03:10.847 [737/738] Generating lib/pipeline.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:10.847 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:10.847 05:55:36 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:10.847 05:55:36 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:10.847 05:55:36 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.847 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.847 [0/1] Installing files. 00:03:11.107 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:11.111 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.111 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.111 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.111 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.111 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.112 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.371 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:11.372 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:11.372 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:11.372 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:11.372 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:11.372 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.374 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.375 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.376 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:11.376 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:11.376 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:11.376 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:11.376 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:11.376 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:11.376 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:11.376 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:11.376 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:11.376 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:11.376 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:11.376 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:11.376 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:11.376 Installing symlink pointing to librte_mbuf.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:11.376 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:11.376 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:11.376 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:11.376 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:11.376 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:11.376 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:11.376 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:11.376 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:11.376 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:11.376 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:11.376 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:11.376 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:11.376 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:11.376 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:11.376 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:11.376 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:11.376 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:11.376 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:11.376 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:11.376 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:11.376 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:11.376 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:11.376 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:11.376 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:11.376 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:11.376 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:11.376 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:11.376 Installing symlink pointing to librte_compressdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:11.376 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:11.376 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:11.376 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:11.376 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:11.376 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:11.376 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:11.376 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:11.376 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:11.376 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:11.376 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:11.376 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:11.376 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:11.376 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:11.376 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:11.376 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:11.376 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:11.376 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:11.376 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:11.376 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:11.376 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:11.376 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:11.376 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:11.376 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:11.376 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:11.376 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:11.376 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:11.376 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:11.376 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:11.376 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:11.376 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:11.376 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:11.376 Installing symlink pointing to librte_latencystats.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:11.376 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:11.376 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:11.376 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:11.376 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:11.376 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:11.376 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:11.376 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:11.376 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:11.376 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:11.376 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:11.376 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:11.377 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:11.377 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:11.377 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:11.377 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:11.377 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:11.377 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:11.377 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:11.377 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:11.377 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:11.377 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:11.377 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:11.377 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:11.377 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:11.377 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:11.377 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:11.377 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:11.377 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 
00:03:11.377 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:11.377 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:11.377 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:11.377 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:11.377 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:11.377 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:11.377 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:11.377 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:11.377 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:11.377 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:11.377 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:11.377 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:11.377 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:11.377 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:11.377 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:11.377 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:11.377 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:11.377 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:11.377 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:11.377 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:11.377 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:11.377 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:11.377 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:11.377 05:55:36 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:11.377 05:55:36 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:11.377 00:03:11.377 real 0m35.036s 00:03:11.377 user 3m50.544s 00:03:11.377 sys 0m35.778s 00:03:11.377 05:55:36 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:11.377 ************************************ 00:03:11.377 END TEST build_native_dpdk 00:03:11.377 
************************************ 00:03:11.377 05:55:36 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:11.633 05:55:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.633 05:55:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.633 05:55:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.633 05:55:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:11.633 05:55:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.633 05:55:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.633 05:55:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.633 05:55:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:11.633 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:11.633 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.633 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.633 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:11.890 Using 'verbs' RDMA provider 00:03:22.790 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:32.759 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:32.759 Creating mk/config.mk...done. 00:03:32.759 Creating mk/cc.flags.mk...done. 00:03:32.759 Type 'make' to build. 00:03:32.759 05:55:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:32.759 05:55:57 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:32.759 05:55:57 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:32.759 05:55:57 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.759 ************************************ 00:03:32.759 START TEST make 00:03:32.759 ************************************ 00:03:32.759 05:55:57 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:32.759 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:32.759 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:32.759 meson setup builddir \ 00:03:32.759 -Dwith-libaio=enabled \ 00:03:32.759 -Dwith-liburing=enabled \ 00:03:32.759 -Dwith-libvfn=disabled \ 00:03:32.759 -Dwith-spdk=false && \ 00:03:32.759 meson compile -C builddir && \ 00:03:32.759 cd -) 00:03:32.759 make[1]: Nothing to be done for 'all'.
00:03:35.312 The Meson build system 00:03:35.312 Version: 1.5.0 00:03:35.312 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:35.312 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:35.312 Build type: native build 00:03:35.312 Project name: xnvme 00:03:35.312 Project version: 0.7.3 00:03:35.312 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:35.312 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:35.312 Host machine cpu family: x86_64 00:03:35.312 Host machine cpu: x86_64 00:03:35.312 Message: host_machine.system: linux 00:03:35.312 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:35.312 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:35.313 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:35.313 Run-time dependency threads found: YES 00:03:35.313 Has header "setupapi.h" : NO 00:03:35.313 Has header "linux/blkzoned.h" : YES 00:03:35.313 Has header "linux/blkzoned.h" : YES (cached) 00:03:35.313 Has header "libaio.h" : YES 00:03:35.313 Library aio found: YES 00:03:35.313 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:35.313 Run-time dependency liburing found: YES 2.2 00:03:35.313 Dependency libvfn skipped: feature with-libvfn disabled 00:03:35.313 Run-time dependency appleframeworks found: NO (tried framework) 00:03:35.313 Run-time dependency appleframeworks found: NO (tried framework) 00:03:35.313 Configuring xnvme_config.h using configuration 00:03:35.313 Configuring xnvme.spec using configuration 00:03:35.313 Run-time dependency bash-completion found: YES 2.11 00:03:35.313 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:35.313 Program cp found: YES (/usr/bin/cp) 00:03:35.313 Has header "winsock2.h" : NO 00:03:35.313 Has header "dbghelp.h" : NO 00:03:35.313 Library rpcrt4 found: NO 00:03:35.313 Library rt found: YES 00:03:35.313 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:35.313 Found CMake: /usr/bin/cmake (3.27.7) 00:03:35.313 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:35.313 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:35.313 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:35.313 Build targets in project: 32 00:03:35.313 00:03:35.313 xnvme 0.7.3 00:03:35.313 00:03:35.313 User defined options 00:03:35.313 with-libaio : enabled 00:03:35.313 with-liburing: enabled 00:03:35.313 with-libvfn : disabled 00:03:35.313 with-spdk : false 00:03:35.313 00:03:35.313 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:35.313 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:35.313 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:35.313 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:35.313 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:35.313 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:35.313 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:35.572 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:35.572 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:35.572 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:35.572 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:35.572 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:35.572 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:35.572 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:35.572 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:35.572 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:35.572 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:35.572 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:35.572 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:35.572 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:35.572 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:35.572 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:35.572 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:35.572 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:35.572 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:35.572 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:35.572 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:35.572 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:35.572 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:35.572 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:35.572 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:35.572 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:35.572 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:35.832 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:35.832 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:35.832 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:35.832 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:35.832 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:35.832 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:35.832 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:35.832 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:35.832 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:35.832 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:35.832 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:35.832 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:35.832 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:35.832 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:35.832 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:35.832 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:35.832 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:35.832 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:35.832 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:35.832 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:35.832 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:35.832 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:35.832 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:35.833 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:35.833 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:35.833 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:35.833 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:35.833 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:35.833 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:35.833 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:35.833 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:35.833 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:35.833 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:35.833 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:36.093 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:36.093 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:36.093 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:36.093 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:36.093 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:36.093 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:36.093 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:36.093 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:36.093 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:36.093 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:36.093 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:36.093 [77/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:36.093 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:36.093 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:36.093 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:36.093 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:36.093 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:36.093 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:36.093 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:36.093 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:36.093 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:36.354 [87/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:36.354 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:36.354 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:36.354 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:36.354 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:36.354 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:36.354 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:36.354 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:36.354 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:36.354 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:36.354 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:36.354 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:36.354 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:36.354 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:36.354 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:36.354 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:36.354 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:36.354 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:36.354 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:36.354 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:36.354 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:36.354 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:36.354 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:36.354 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:36.354 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:36.354 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:36.354 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:36.354 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:36.354 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:36.354 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:36.354 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:36.354 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:36.354 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:36.354 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:36.354 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:36.613 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:36.613 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:36.613 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:36.613 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:36.613 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:36.613 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:36.613 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:36.613 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:36.613 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:36.613 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:36.613 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:36.613 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:36.613 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:36.613 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:36.613 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:36.613 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:36.613 [138/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:36.613 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:36.613 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:36.613 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:36.875 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:36.875 [143/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:36.875 [144/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:36.875 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:36.875 [146/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:36.875 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:36.875 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:36.875 [149/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:36.875 [150/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:36.875 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:36.875 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:36.875 [153/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:36.875 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:36.875 [155/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:36.875 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:36.875 [157/203] Linking target lib/libxnvme.so 00:03:37.133 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:37.133 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:37.133 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:37.133 [161/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:37.133 [162/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:37.133 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:37.133 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:37.133 [165/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:37.133 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:37.133 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:37.133 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:37.133 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:37.133 [170/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:37.393 [171/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:37.393 [172/203] Linking static target lib/libxnvme.a 00:03:37.393 [173/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:37.393 [174/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:37.393 [175/203] Linking target tests/xnvme_tests_cli 00:03:37.393 [176/203] Linking target tests/xnvme_tests_buf 00:03:37.393 [177/203] Linking target tests/xnvme_tests_lblk 00:03:37.393 [178/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:37.393 [179/203] Linking target tests/xnvme_tests_ioworker 00:03:37.393 [180/203] Linking target tests/xnvme_tests_znd_append 00:03:37.393 [181/203] Linking target tests/xnvme_tests_znd_state 00:03:37.393 [182/203] Linking target tests/xnvme_tests_enum 00:03:37.393 [183/203] Linking target tests/xnvme_tests_async_intf 00:03:37.393 [184/203] Linking target tests/xnvme_tests_scc 00:03:37.393 [185/203] Linking target tests/xnvme_tests_xnvme_file 00:03:37.393 [186/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:37.393 [187/203] Linking target tests/xnvme_tests_kvs 00:03:37.393 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:37.393 [189/203] Linking target tests/xnvme_tests_map 00:03:37.393 [190/203] Linking target tools/lblk 00:03:37.393 [191/203] Linking 
target tools/xdd 00:03:37.393 [192/203] Linking target tools/kvs 00:03:37.393 [193/203] Linking target examples/xnvme_dev 00:03:37.393 [194/203] Linking target tools/xnvme 00:03:37.393 [195/203] Linking target tools/zoned 00:03:37.393 [196/203] Linking target examples/xnvme_hello 00:03:37.393 [197/203] Linking target tools/xnvme_file 00:03:37.393 [198/203] Linking target examples/xnvme_io_async 00:03:37.393 [199/203] Linking target examples/xnvme_single_async 00:03:37.393 [200/203] Linking target examples/zoned_io_async 00:03:37.393 [201/203] Linking target examples/zoned_io_sync 00:03:37.393 [202/203] Linking target examples/xnvme_enum 00:03:37.393 [203/203] Linking target examples/xnvme_single_sync 00:03:37.393 INFO: autodetecting backend as ninja 00:03:37.393 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:37.393 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:09.481 CC lib/log/log.o 00:04:09.481 CC lib/log/log_deprecated.o 00:04:09.481 CC lib/log/log_flags.o 00:04:09.481 CC lib/ut/ut.o 00:04:09.481 CC lib/ut_mock/mock.o 00:04:09.481 LIB libspdk_ut.a 00:04:09.481 LIB libspdk_log.a 00:04:09.481 SO libspdk_ut.so.2.0 00:04:09.481 SO libspdk_log.so.7.0 00:04:09.481 LIB libspdk_ut_mock.a 00:04:09.481 SYMLINK libspdk_ut.so 00:04:09.481 SO libspdk_ut_mock.so.6.0 00:04:09.481 SYMLINK libspdk_log.so 00:04:09.481 SYMLINK libspdk_ut_mock.so 00:04:09.481 CC lib/util/base64.o 00:04:09.481 CC lib/util/bit_array.o 00:04:09.481 CC lib/util/cpuset.o 00:04:09.481 CC lib/util/crc16.o 00:04:09.481 CC lib/util/crc32.o 00:04:09.481 CC lib/util/crc32c.o 00:04:09.481 CC lib/dma/dma.o 00:04:09.481 CXX lib/trace_parser/trace.o 00:04:09.481 CC lib/ioat/ioat.o 00:04:09.481 CC lib/vfio_user/host/vfio_user_pci.o 00:04:09.481 CC lib/util/crc32_ieee.o 00:04:09.481 CC lib/util/crc64.o 00:04:09.481 CC lib/util/dif.o 00:04:09.481 CC lib/util/fd.o 00:04:09.481 CC lib/util/fd_group.o 00:04:09.481 LIB libspdk_dma.a 00:04:09.481 SO libspdk_dma.so.5.0 00:04:09.481 CC lib/util/file.o 00:04:09.481 CC lib/util/hexlify.o 00:04:09.481 CC lib/util/iov.o 00:04:09.481 CC lib/util/math.o 00:04:09.481 LIB libspdk_ioat.a 00:04:09.481 SYMLINK libspdk_dma.so 00:04:09.481 SO libspdk_ioat.so.7.0 00:04:09.481 CC lib/vfio_user/host/vfio_user.o 00:04:09.481 SYMLINK libspdk_ioat.so 00:04:09.481 CC lib/util/net.o 00:04:09.481 CC lib/util/pipe.o 00:04:09.481 CC lib/util/strerror_tls.o 00:04:09.481 CC lib/util/string.o 00:04:09.481 CC lib/util/uuid.o 00:04:09.481 CC lib/util/xor.o 00:04:09.481 CC lib/util/zipf.o 00:04:09.481 CC lib/util/md5.o 00:04:09.481 LIB libspdk_vfio_user.a 00:04:09.481 SO libspdk_vfio_user.so.5.0 00:04:09.481 SYMLINK libspdk_vfio_user.so 00:04:09.481 LIB libspdk_util.a 00:04:09.481 SO libspdk_util.so.10.0 00:04:09.481 LIB libspdk_trace_parser.a 00:04:09.481 SYMLINK libspdk_util.so 00:04:09.481 SO libspdk_trace_parser.so.6.0 00:04:09.481 SYMLINK libspdk_trace_parser.so 00:04:09.481 CC lib/conf/conf.o 00:04:09.481 CC lib/vmd/vmd.o 00:04:09.481 CC lib/vmd/led.o 00:04:09.481 CC lib/json/json_parse.o 00:04:09.481 CC lib/json/json_util.o 00:04:09.481 CC lib/json/json_write.o 00:04:09.481 CC lib/rdma_provider/common.o 00:04:09.481 CC lib/env_dpdk/env.o 00:04:09.481 CC lib/rdma_utils/rdma_utils.o 00:04:09.481 CC lib/idxd/idxd.o 00:04:09.481 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:09.481 CC lib/env_dpdk/memory.o 00:04:09.481 LIB libspdk_conf.a 00:04:09.481 CC lib/env_dpdk/pci.o 00:04:09.481 CC lib/env_dpdk/init.o 00:04:09.481 SO libspdk_conf.so.6.0 
00:04:09.481 LIB libspdk_json.a 00:04:09.481 SO libspdk_json.so.6.0 00:04:09.481 LIB libspdk_rdma_utils.a 00:04:09.481 LIB libspdk_rdma_provider.a 00:04:09.481 SYMLINK libspdk_conf.so 00:04:09.481 SO libspdk_rdma_utils.so.1.0 00:04:09.481 SO libspdk_rdma_provider.so.6.0 00:04:09.481 CC lib/env_dpdk/threads.o 00:04:09.481 SYMLINK libspdk_json.so 00:04:09.481 CC lib/env_dpdk/pci_ioat.o 00:04:09.481 SYMLINK libspdk_rdma_utils.so 00:04:09.481 SYMLINK libspdk_rdma_provider.so 00:04:09.481 CC lib/env_dpdk/pci_virtio.o 00:04:09.481 CC lib/env_dpdk/pci_vmd.o 00:04:09.481 CC lib/env_dpdk/pci_idxd.o 00:04:09.481 CC lib/env_dpdk/pci_event.o 00:04:09.481 CC lib/jsonrpc/jsonrpc_server.o 00:04:09.481 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:09.481 CC lib/env_dpdk/sigbus_handler.o 00:04:09.481 CC lib/env_dpdk/pci_dpdk.o 00:04:09.481 CC lib/idxd/idxd_user.o 00:04:09.482 CC lib/idxd/idxd_kernel.o 00:04:09.482 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:09.482 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:09.482 CC lib/jsonrpc/jsonrpc_client.o 00:04:09.482 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:09.482 LIB libspdk_vmd.a 00:04:09.482 SO libspdk_vmd.so.6.0 00:04:09.482 LIB libspdk_idxd.a 00:04:09.482 SYMLINK libspdk_vmd.so 00:04:09.482 SO libspdk_idxd.so.12.1 00:04:09.482 LIB libspdk_jsonrpc.a 00:04:09.482 SO libspdk_jsonrpc.so.6.0 00:04:09.482 SYMLINK libspdk_idxd.so 00:04:09.482 SYMLINK libspdk_jsonrpc.so 00:04:09.740 CC lib/rpc/rpc.o 00:04:09.997 LIB libspdk_rpc.a 00:04:09.997 SO libspdk_rpc.so.6.0 00:04:09.997 SYMLINK libspdk_rpc.so 00:04:09.997 LIB libspdk_env_dpdk.a 00:04:09.997 SO libspdk_env_dpdk.so.15.0 00:04:09.997 CC lib/trace/trace_flags.o 00:04:09.997 CC lib/trace/trace.o 00:04:09.997 CC lib/trace/trace_rpc.o 00:04:09.997 CC lib/notify/notify.o 00:04:10.255 CC lib/notify/notify_rpc.o 00:04:10.255 CC lib/keyring/keyring.o 00:04:10.255 CC lib/keyring/keyring_rpc.o 00:04:10.255 SYMLINK libspdk_env_dpdk.so 00:04:10.255 LIB libspdk_notify.a 00:04:10.255 SO libspdk_notify.so.6.0 00:04:10.255 LIB libspdk_keyring.a 00:04:10.255 SYMLINK libspdk_notify.so 00:04:10.255 LIB libspdk_trace.a 00:04:10.255 SO libspdk_keyring.so.2.0 00:04:10.255 SO libspdk_trace.so.11.0 00:04:10.513 SYMLINK libspdk_keyring.so 00:04:10.513 SYMLINK libspdk_trace.so 00:04:10.513 CC lib/thread/thread.o 00:04:10.513 CC lib/thread/iobuf.o 00:04:10.513 CC lib/sock/sock_rpc.o 00:04:10.513 CC lib/sock/sock.o 00:04:11.079 LIB libspdk_sock.a 00:04:11.079 SO libspdk_sock.so.10.0 00:04:11.079 SYMLINK libspdk_sock.so 00:04:11.338 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:11.338 CC lib/nvme/nvme_fabric.o 00:04:11.338 CC lib/nvme/nvme_ctrlr.o 00:04:11.338 CC lib/nvme/nvme_ns.o 00:04:11.338 CC lib/nvme/nvme_pcie.o 00:04:11.338 CC lib/nvme/nvme_ns_cmd.o 00:04:11.338 CC lib/nvme/nvme.o 00:04:11.338 CC lib/nvme/nvme_qpair.o 00:04:11.338 CC lib/nvme/nvme_pcie_common.o 00:04:11.904 CC lib/nvme/nvme_quirks.o 00:04:11.904 CC lib/nvme/nvme_transport.o 00:04:11.904 CC lib/nvme/nvme_discovery.o 00:04:12.162 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:12.162 LIB libspdk_thread.a 00:04:12.162 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:12.162 CC lib/nvme/nvme_tcp.o 00:04:12.162 SO libspdk_thread.so.10.1 00:04:12.162 CC lib/nvme/nvme_opal.o 00:04:12.162 SYMLINK libspdk_thread.so 00:04:12.421 CC lib/nvme/nvme_io_msg.o 00:04:12.421 CC lib/accel/accel.o 00:04:12.421 CC lib/blob/blobstore.o 00:04:12.421 CC lib/blob/request.o 00:04:12.421 CC lib/blob/zeroes.o 00:04:12.679 CC lib/accel/accel_rpc.o 00:04:12.679 CC lib/blob/blob_bs_dev.o 00:04:12.679 CC lib/virtio/virtio.o 00:04:12.679 CC 
lib/init/json_config.o 00:04:12.938 CC lib/accel/accel_sw.o 00:04:12.938 CC lib/init/subsystem.o 00:04:12.938 CC lib/fsdev/fsdev.o 00:04:12.938 CC lib/init/subsystem_rpc.o 00:04:12.938 CC lib/fsdev/fsdev_io.o 00:04:12.938 CC lib/fsdev/fsdev_rpc.o 00:04:13.196 CC lib/virtio/virtio_vhost_user.o 00:04:13.196 CC lib/init/rpc.o 00:04:13.196 CC lib/virtio/virtio_vfio_user.o 00:04:13.196 CC lib/virtio/virtio_pci.o 00:04:13.196 LIB libspdk_init.a 00:04:13.455 CC lib/nvme/nvme_poll_group.o 00:04:13.455 SO libspdk_init.so.6.0 00:04:13.455 SYMLINK libspdk_init.so 00:04:13.455 CC lib/nvme/nvme_stubs.o 00:04:13.455 CC lib/nvme/nvme_zns.o 00:04:13.455 LIB libspdk_fsdev.a 00:04:13.455 LIB libspdk_virtio.a 00:04:13.455 SO libspdk_fsdev.so.1.0 00:04:13.455 SO libspdk_virtio.so.7.0 00:04:13.455 CC lib/nvme/nvme_auth.o 00:04:13.455 SYMLINK libspdk_fsdev.so 00:04:13.455 CC lib/nvme/nvme_cuse.o 00:04:13.455 SYMLINK libspdk_virtio.so 00:04:13.455 CC lib/event/app.o 00:04:13.727 LIB libspdk_accel.a 00:04:13.727 SO libspdk_accel.so.16.0 00:04:13.727 CC lib/event/reactor.o 00:04:13.727 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:13.727 SYMLINK libspdk_accel.so 00:04:13.727 CC lib/event/log_rpc.o 00:04:13.727 CC lib/event/app_rpc.o 00:04:14.002 CC lib/event/scheduler_static.o 00:04:14.002 CC lib/nvme/nvme_rdma.o 00:04:14.002 CC lib/bdev/bdev.o 00:04:14.002 CC lib/bdev/bdev_rpc.o 00:04:14.002 CC lib/bdev/part.o 00:04:14.002 CC lib/bdev/bdev_zone.o 00:04:14.002 LIB libspdk_event.a 00:04:14.002 SO libspdk_event.so.14.0 00:04:14.261 SYMLINK libspdk_event.so 00:04:14.261 CC lib/bdev/scsi_nvme.o 00:04:14.261 LIB libspdk_fuse_dispatcher.a 00:04:14.261 SO libspdk_fuse_dispatcher.so.1.0 00:04:14.519 SYMLINK libspdk_fuse_dispatcher.so 00:04:15.453 LIB libspdk_nvme.a 00:04:15.453 SO libspdk_nvme.so.14.0 00:04:15.453 LIB libspdk_blob.a 00:04:15.453 SO libspdk_blob.so.11.0 00:04:15.712 SYMLINK libspdk_blob.so 00:04:15.712 SYMLINK libspdk_nvme.so 00:04:15.712 CC lib/lvol/lvol.o 00:04:15.712 CC lib/blobfs/blobfs.o 00:04:15.712 CC lib/blobfs/tree.o 00:04:16.279 LIB libspdk_bdev.a 00:04:16.537 SO libspdk_bdev.so.16.0 00:04:16.537 SYMLINK libspdk_bdev.so 00:04:16.537 LIB libspdk_lvol.a 00:04:16.537 SO libspdk_lvol.so.10.0 00:04:16.537 SYMLINK libspdk_lvol.so 00:04:16.537 CC lib/nvmf/ctrlr.o 00:04:16.537 CC lib/nbd/nbd.o 00:04:16.537 CC lib/nbd/nbd_rpc.o 00:04:16.537 CC lib/nvmf/ctrlr_discovery.o 00:04:16.537 CC lib/ftl/ftl_core.o 00:04:16.537 CC lib/ftl/ftl_layout.o 00:04:16.537 CC lib/ftl/ftl_init.o 00:04:16.537 CC lib/ublk/ublk.o 00:04:16.537 CC lib/scsi/dev.o 00:04:16.796 LIB libspdk_blobfs.a 00:04:16.796 SO libspdk_blobfs.so.10.0 00:04:16.796 SYMLINK libspdk_blobfs.so 00:04:16.796 CC lib/scsi/lun.o 00:04:16.796 CC lib/scsi/port.o 00:04:16.796 CC lib/ftl/ftl_debug.o 00:04:16.796 CC lib/ftl/ftl_io.o 00:04:17.055 CC lib/ftl/ftl_sb.o 00:04:17.055 CC lib/ublk/ublk_rpc.o 00:04:17.055 CC lib/scsi/scsi.o 00:04:17.055 CC lib/scsi/scsi_bdev.o 00:04:17.055 CC lib/nvmf/ctrlr_bdev.o 00:04:17.055 LIB libspdk_nbd.a 00:04:17.055 CC lib/ftl/ftl_l2p.o 00:04:17.055 SO libspdk_nbd.so.7.0 00:04:17.055 CC lib/nvmf/subsystem.o 00:04:17.055 SYMLINK libspdk_nbd.so 00:04:17.055 CC lib/ftl/ftl_l2p_flat.o 00:04:17.055 CC lib/nvmf/nvmf.o 00:04:17.055 CC lib/ftl/ftl_nv_cache.o 00:04:17.055 CC lib/nvmf/nvmf_rpc.o 00:04:17.055 LIB libspdk_ublk.a 00:04:17.055 SO libspdk_ublk.so.3.0 00:04:17.314 SYMLINK libspdk_ublk.so 00:04:17.314 CC lib/ftl/ftl_band.o 00:04:17.314 CC lib/ftl/ftl_band_ops.o 00:04:17.314 CC lib/scsi/scsi_pr.o 00:04:17.314 CC 
lib/scsi/scsi_rpc.o 00:04:17.572 CC lib/nvmf/transport.o 00:04:17.572 CC lib/scsi/task.o 00:04:17.572 CC lib/ftl/ftl_writer.o 00:04:17.572 CC lib/ftl/ftl_rq.o 00:04:17.572 CC lib/nvmf/tcp.o 00:04:17.572 LIB libspdk_scsi.a 00:04:17.572 CC lib/nvmf/stubs.o 00:04:17.572 CC lib/nvmf/mdns_server.o 00:04:17.572 SO libspdk_scsi.so.9.0 00:04:17.830 SYMLINK libspdk_scsi.so 00:04:17.830 CC lib/nvmf/rdma.o 00:04:18.089 CC lib/nvmf/auth.o 00:04:18.089 CC lib/ftl/ftl_reloc.o 00:04:18.089 CC lib/ftl/ftl_l2p_cache.o 00:04:18.089 CC lib/ftl/ftl_p2l.o 00:04:18.089 CC lib/iscsi/conn.o 00:04:18.089 CC lib/vhost/vhost.o 00:04:18.347 CC lib/iscsi/init_grp.o 00:04:18.347 CC lib/iscsi/iscsi.o 00:04:18.606 CC lib/iscsi/param.o 00:04:18.606 CC lib/vhost/vhost_rpc.o 00:04:18.606 CC lib/iscsi/portal_grp.o 00:04:18.606 CC lib/vhost/vhost_scsi.o 00:04:18.606 CC lib/ftl/ftl_p2l_log.o 00:04:18.606 CC lib/vhost/vhost_blk.o 00:04:18.864 CC lib/vhost/rte_vhost_user.o 00:04:18.864 CC lib/iscsi/tgt_node.o 00:04:18.864 CC lib/iscsi/iscsi_subsystem.o 00:04:18.864 CC lib/ftl/mngt/ftl_mngt.o 00:04:18.864 CC lib/iscsi/iscsi_rpc.o 00:04:19.122 CC lib/iscsi/task.o 00:04:19.123 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:19.123 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:19.123 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:19.123 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:19.381 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:19.381 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:19.381 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:19.381 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:19.381 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:19.381 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:19.381 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:19.381 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:19.381 CC lib/ftl/utils/ftl_conf.o 00:04:19.381 LIB libspdk_nvmf.a 00:04:19.640 CC lib/ftl/utils/ftl_md.o 00:04:19.640 LIB libspdk_vhost.a 00:04:19.640 CC lib/ftl/utils/ftl_mempool.o 00:04:19.640 SO libspdk_vhost.so.8.0 00:04:19.640 CC lib/ftl/utils/ftl_bitmap.o 00:04:19.640 CC lib/ftl/utils/ftl_property.o 00:04:19.640 SO libspdk_nvmf.so.19.0 00:04:19.640 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:19.640 SYMLINK libspdk_vhost.so 00:04:19.640 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:19.640 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:19.640 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:19.640 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:19.898 SYMLINK libspdk_nvmf.so 00:04:19.898 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:19.898 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:19.898 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:19.898 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:19.898 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:19.898 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:19.898 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:19.898 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:19.898 CC lib/ftl/base/ftl_base_dev.o 00:04:19.898 LIB libspdk_iscsi.a 00:04:19.898 CC lib/ftl/base/ftl_base_bdev.o 00:04:19.898 CC lib/ftl/ftl_trace.o 00:04:19.898 SO libspdk_iscsi.so.8.0 00:04:20.157 SYMLINK libspdk_iscsi.so 00:04:20.157 LIB libspdk_ftl.a 00:04:20.415 SO libspdk_ftl.so.9.0 00:04:20.675 SYMLINK libspdk_ftl.so 00:04:20.933 CC module/env_dpdk/env_dpdk_rpc.o 00:04:20.933 CC module/sock/posix/posix.o 00:04:20.933 CC module/accel/error/accel_error.o 00:04:20.933 CC module/keyring/linux/keyring.o 00:04:20.933 CC module/fsdev/aio/fsdev_aio.o 00:04:20.933 CC module/accel/ioat/accel_ioat.o 00:04:20.933 CC module/blob/bdev/blob_bdev.o 00:04:20.933 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:20.933 CC module/keyring/file/keyring.o 00:04:20.933 CC 
module/accel/dsa/accel_dsa.o 00:04:20.933 LIB libspdk_env_dpdk_rpc.a 00:04:20.933 SO libspdk_env_dpdk_rpc.so.6.0 00:04:20.933 SYMLINK libspdk_env_dpdk_rpc.so 00:04:20.933 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:20.933 CC module/accel/error/accel_error_rpc.o 00:04:20.933 CC module/keyring/linux/keyring_rpc.o 00:04:20.933 CC module/keyring/file/keyring_rpc.o 00:04:21.192 CC module/accel/ioat/accel_ioat_rpc.o 00:04:21.192 LIB libspdk_scheduler_dynamic.a 00:04:21.192 CC module/accel/dsa/accel_dsa_rpc.o 00:04:21.192 SO libspdk_scheduler_dynamic.so.4.0 00:04:21.192 LIB libspdk_accel_error.a 00:04:21.192 LIB libspdk_keyring_linux.a 00:04:21.192 SO libspdk_accel_error.so.2.0 00:04:21.192 SO libspdk_keyring_linux.so.1.0 00:04:21.192 LIB libspdk_keyring_file.a 00:04:21.192 LIB libspdk_blob_bdev.a 00:04:21.192 SYMLINK libspdk_scheduler_dynamic.so 00:04:21.192 SO libspdk_keyring_file.so.2.0 00:04:21.192 LIB libspdk_accel_ioat.a 00:04:21.192 SO libspdk_blob_bdev.so.11.0 00:04:21.192 SYMLINK libspdk_accel_error.so 00:04:21.192 CC module/fsdev/aio/linux_aio_mgr.o 00:04:21.192 SO libspdk_accel_ioat.so.6.0 00:04:21.192 SYMLINK libspdk_keyring_linux.so 00:04:21.192 SYMLINK libspdk_blob_bdev.so 00:04:21.192 SYMLINK libspdk_keyring_file.so 00:04:21.192 LIB libspdk_accel_dsa.a 00:04:21.192 SYMLINK libspdk_accel_ioat.so 00:04:21.192 SO libspdk_accel_dsa.so.5.0 00:04:21.451 SYMLINK libspdk_accel_dsa.so 00:04:21.451 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:21.451 CC module/accel/iaa/accel_iaa.o 00:04:21.451 CC module/accel/iaa/accel_iaa_rpc.o 00:04:21.451 CC module/scheduler/gscheduler/gscheduler.o 00:04:21.451 LIB libspdk_fsdev_aio.a 00:04:21.451 CC module/bdev/delay/vbdev_delay.o 00:04:21.451 CC module/bdev/error/vbdev_error.o 00:04:21.451 SO libspdk_fsdev_aio.so.1.0 00:04:21.451 CC module/bdev/gpt/gpt.o 00:04:21.451 LIB libspdk_scheduler_dpdk_governor.a 00:04:21.451 CC module/blobfs/bdev/blobfs_bdev.o 00:04:21.451 LIB libspdk_sock_posix.a 00:04:21.451 LIB libspdk_scheduler_gscheduler.a 00:04:21.451 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:21.451 CC module/bdev/gpt/vbdev_gpt.o 00:04:21.451 LIB libspdk_accel_iaa.a 00:04:21.451 SO libspdk_sock_posix.so.6.0 00:04:21.451 SO libspdk_scheduler_gscheduler.so.4.0 00:04:21.451 SYMLINK libspdk_fsdev_aio.so 00:04:21.451 SO libspdk_accel_iaa.so.3.0 00:04:21.451 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:21.710 SYMLINK libspdk_scheduler_gscheduler.so 00:04:21.710 CC module/bdev/error/vbdev_error_rpc.o 00:04:21.710 SYMLINK libspdk_accel_iaa.so 00:04:21.710 SYMLINK libspdk_sock_posix.so 00:04:21.710 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:21.710 CC module/bdev/lvol/vbdev_lvol.o 00:04:21.710 CC module/bdev/malloc/bdev_malloc.o 00:04:21.710 CC module/bdev/nvme/bdev_nvme.o 00:04:21.710 CC module/bdev/null/bdev_null.o 00:04:21.710 CC module/bdev/null/bdev_null_rpc.o 00:04:21.710 LIB libspdk_bdev_error.a 00:04:21.710 SO libspdk_bdev_error.so.6.0 00:04:21.710 LIB libspdk_blobfs_bdev.a 00:04:21.710 LIB libspdk_bdev_gpt.a 00:04:21.710 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:21.710 SO libspdk_bdev_gpt.so.6.0 00:04:21.710 SO libspdk_blobfs_bdev.so.6.0 00:04:21.710 CC module/bdev/passthru/vbdev_passthru.o 00:04:21.710 SYMLINK libspdk_bdev_error.so 00:04:21.710 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:21.968 SYMLINK libspdk_bdev_gpt.so 00:04:21.968 SYMLINK libspdk_blobfs_bdev.so 00:04:21.968 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:21.968 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:21.968 LIB libspdk_bdev_null.a 00:04:21.968 LIB 
libspdk_bdev_delay.a 00:04:21.968 SO libspdk_bdev_null.so.6.0 00:04:21.968 CC module/bdev/raid/bdev_raid.o 00:04:21.968 SO libspdk_bdev_delay.so.6.0 00:04:21.968 SYMLINK libspdk_bdev_null.so 00:04:21.968 SYMLINK libspdk_bdev_delay.so 00:04:21.968 LIB libspdk_bdev_malloc.a 00:04:21.968 SO libspdk_bdev_malloc.so.6.0 00:04:22.227 LIB libspdk_bdev_passthru.a 00:04:22.227 CC module/bdev/split/vbdev_split.o 00:04:22.227 CC module/bdev/xnvme/bdev_xnvme.o 00:04:22.227 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:22.227 SO libspdk_bdev_passthru.so.6.0 00:04:22.227 SYMLINK libspdk_bdev_malloc.so 00:04:22.227 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:22.227 CC module/bdev/aio/bdev_aio.o 00:04:22.227 LIB libspdk_bdev_lvol.a 00:04:22.227 SYMLINK libspdk_bdev_passthru.so 00:04:22.227 CC module/bdev/aio/bdev_aio_rpc.o 00:04:22.227 SO libspdk_bdev_lvol.so.6.0 00:04:22.227 CC module/bdev/ftl/bdev_ftl.o 00:04:22.227 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:22.227 SYMLINK libspdk_bdev_lvol.so 00:04:22.227 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:22.227 CC module/bdev/split/vbdev_split_rpc.o 00:04:22.487 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:22.487 CC module/bdev/iscsi/bdev_iscsi.o 00:04:22.487 CC module/bdev/raid/bdev_raid_rpc.o 00:04:22.487 LIB libspdk_bdev_zone_block.a 00:04:22.487 LIB libspdk_bdev_aio.a 00:04:22.487 SO libspdk_bdev_zone_block.so.6.0 00:04:22.487 LIB libspdk_bdev_split.a 00:04:22.487 LIB libspdk_bdev_xnvme.a 00:04:22.487 SO libspdk_bdev_aio.so.6.0 00:04:22.487 SO libspdk_bdev_split.so.6.0 00:04:22.487 LIB libspdk_bdev_ftl.a 00:04:22.487 SO libspdk_bdev_xnvme.so.3.0 00:04:22.487 SYMLINK libspdk_bdev_zone_block.so 00:04:22.487 SO libspdk_bdev_ftl.so.6.0 00:04:22.487 SYMLINK libspdk_bdev_aio.so 00:04:22.487 CC module/bdev/nvme/nvme_rpc.o 00:04:22.487 SYMLINK libspdk_bdev_xnvme.so 00:04:22.487 SYMLINK libspdk_bdev_split.so 00:04:22.487 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:22.487 CC module/bdev/raid/bdev_raid_sb.o 00:04:22.487 SYMLINK libspdk_bdev_ftl.so 00:04:22.487 CC module/bdev/nvme/bdev_mdns_client.o 00:04:22.746 CC module/bdev/raid/raid0.o 00:04:22.746 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:22.746 CC module/bdev/raid/raid1.o 00:04:22.746 CC module/bdev/nvme/vbdev_opal.o 00:04:22.746 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:22.746 LIB libspdk_bdev_iscsi.a 00:04:22.746 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:22.746 SO libspdk_bdev_iscsi.so.6.0 00:04:22.746 SYMLINK libspdk_bdev_iscsi.so 00:04:22.746 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:22.746 CC module/bdev/raid/concat.o 00:04:23.006 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:23.006 LIB libspdk_bdev_raid.a 00:04:23.006 LIB libspdk_bdev_virtio.a 00:04:23.006 SO libspdk_bdev_raid.so.6.0 00:04:23.265 SO libspdk_bdev_virtio.so.6.0 00:04:23.265 SYMLINK libspdk_bdev_raid.so 00:04:23.265 SYMLINK libspdk_bdev_virtio.so 00:04:23.831 LIB libspdk_bdev_nvme.a 00:04:23.831 SO libspdk_bdev_nvme.so.7.0 00:04:23.831 SYMLINK libspdk_bdev_nvme.so 00:04:24.398 CC module/event/subsystems/keyring/keyring.o 00:04:24.398 CC module/event/subsystems/scheduler/scheduler.o 00:04:24.398 CC module/event/subsystems/fsdev/fsdev.o 00:04:24.398 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:24.398 CC module/event/subsystems/iobuf/iobuf.o 00:04:24.398 CC module/event/subsystems/sock/sock.o 00:04:24.398 CC module/event/subsystems/vmd/vmd.o 00:04:24.398 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:24.398 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:24.398 LIB libspdk_event_sock.a 00:04:24.398 LIB 
libspdk_event_fsdev.a 00:04:24.398 LIB libspdk_event_keyring.a 00:04:24.398 LIB libspdk_event_vhost_blk.a 00:04:24.398 SO libspdk_event_sock.so.5.0 00:04:24.398 SO libspdk_event_keyring.so.1.0 00:04:24.398 SO libspdk_event_fsdev.so.1.0 00:04:24.398 LIB libspdk_event_scheduler.a 00:04:24.398 LIB libspdk_event_vmd.a 00:04:24.398 SO libspdk_event_vhost_blk.so.3.0 00:04:24.398 SO libspdk_event_scheduler.so.4.0 00:04:24.398 SO libspdk_event_vmd.so.6.0 00:04:24.398 SYMLINK libspdk_event_keyring.so 00:04:24.398 SYMLINK libspdk_event_fsdev.so 00:04:24.398 LIB libspdk_event_iobuf.a 00:04:24.398 SYMLINK libspdk_event_sock.so 00:04:24.398 SYMLINK libspdk_event_vhost_blk.so 00:04:24.398 SYMLINK libspdk_event_scheduler.so 00:04:24.398 SO libspdk_event_iobuf.so.3.0 00:04:24.398 SYMLINK libspdk_event_vmd.so 00:04:24.398 SYMLINK libspdk_event_iobuf.so 00:04:24.657 CC module/event/subsystems/accel/accel.o 00:04:24.915 LIB libspdk_event_accel.a 00:04:24.915 SO libspdk_event_accel.so.6.0 00:04:24.915 SYMLINK libspdk_event_accel.so 00:04:25.173 CC module/event/subsystems/bdev/bdev.o 00:04:25.173 LIB libspdk_event_bdev.a 00:04:25.431 SO libspdk_event_bdev.so.6.0 00:04:25.431 SYMLINK libspdk_event_bdev.so 00:04:25.431 CC module/event/subsystems/nbd/nbd.o 00:04:25.431 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:25.431 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:25.431 CC module/event/subsystems/scsi/scsi.o 00:04:25.431 CC module/event/subsystems/ublk/ublk.o 00:04:25.690 LIB libspdk_event_scsi.a 00:04:25.690 LIB libspdk_event_ublk.a 00:04:25.690 LIB libspdk_event_nbd.a 00:04:25.690 SO libspdk_event_scsi.so.6.0 00:04:25.690 SO libspdk_event_ublk.so.3.0 00:04:25.690 SO libspdk_event_nbd.so.6.0 00:04:25.690 LIB libspdk_event_nvmf.a 00:04:25.690 SYMLINK libspdk_event_scsi.so 00:04:25.690 SO libspdk_event_nvmf.so.6.0 00:04:25.690 SYMLINK libspdk_event_nbd.so 00:04:25.690 SYMLINK libspdk_event_ublk.so 00:04:25.690 SYMLINK libspdk_event_nvmf.so 00:04:25.955 CC module/event/subsystems/iscsi/iscsi.o 00:04:25.955 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:25.955 LIB libspdk_event_vhost_scsi.a 00:04:25.955 SO libspdk_event_vhost_scsi.so.3.0 00:04:25.955 LIB libspdk_event_iscsi.a 00:04:25.955 SO libspdk_event_iscsi.so.6.0 00:04:25.955 SYMLINK libspdk_event_vhost_scsi.so 00:04:26.225 SYMLINK libspdk_event_iscsi.so 00:04:26.225 SO libspdk.so.6.0 00:04:26.225 SYMLINK libspdk.so 00:04:26.483 CC app/trace_record/trace_record.o 00:04:26.483 CXX app/trace/trace.o 00:04:26.483 CC app/spdk_nvme_identify/identify.o 00:04:26.483 CC app/spdk_nvme_perf/perf.o 00:04:26.483 CC app/spdk_lspci/spdk_lspci.o 00:04:26.483 CC app/nvmf_tgt/nvmf_main.o 00:04:26.483 CC app/iscsi_tgt/iscsi_tgt.o 00:04:26.483 CC test/thread/poller_perf/poller_perf.o 00:04:26.483 CC app/spdk_tgt/spdk_tgt.o 00:04:26.483 CC examples/util/zipf/zipf.o 00:04:26.483 LINK spdk_lspci 00:04:26.483 LINK nvmf_tgt 00:04:26.483 LINK poller_perf 00:04:26.483 LINK zipf 00:04:26.483 LINK iscsi_tgt 00:04:26.483 LINK spdk_trace_record 00:04:26.741 LINK spdk_tgt 00:04:26.741 LINK spdk_trace 00:04:26.741 CC app/spdk_nvme_discover/discovery_aer.o 00:04:26.741 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:26.741 CC examples/ioat/perf/perf.o 00:04:26.741 CC examples/ioat/verify/verify.o 00:04:26.741 CC test/dma/test_dma/test_dma.o 00:04:26.741 CC app/spdk_top/spdk_top.o 00:04:26.999 CC test/app/bdev_svc/bdev_svc.o 00:04:26.999 LINK spdk_nvme_discover 00:04:26.999 LINK interrupt_tgt 00:04:26.999 LINK verify 00:04:26.999 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 
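The SO/SYMLINK pairs above come from the library install step: each libspdk_*.so is built under a versioned name and then exposed through an unversioned symlink. A minimal sketch of that pattern in shell — libspdk_init and version 6.0 are taken from the log, but the exact soname flags are an assumption, not SPDK's actual link line:

    # build a versioned shared object, then point the plain .so at it
    gcc -shared -fPIC -Wl,-soname,libspdk_init.so.6 -o libspdk_init.so.6.0 init.o
    ln -sf libspdk_init.so.6.0 libspdk_init.so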
00:04:26.999 LINK ioat_perf 00:04:26.999 LINK spdk_nvme_perf 00:04:26.999 LINK bdev_svc 00:04:26.999 CC test/app/jsoncat/jsoncat.o 00:04:26.999 CC test/app/histogram_perf/histogram_perf.o 00:04:27.257 TEST_HEADER include/spdk/accel.h 00:04:27.257 TEST_HEADER include/spdk/accel_module.h 00:04:27.257 TEST_HEADER include/spdk/assert.h 00:04:27.257 TEST_HEADER include/spdk/barrier.h 00:04:27.257 TEST_HEADER include/spdk/base64.h 00:04:27.257 TEST_HEADER include/spdk/bdev.h 00:04:27.257 TEST_HEADER include/spdk/bdev_module.h 00:04:27.257 TEST_HEADER include/spdk/bdev_zone.h 00:04:27.257 TEST_HEADER include/spdk/bit_array.h 00:04:27.257 TEST_HEADER include/spdk/bit_pool.h 00:04:27.257 TEST_HEADER include/spdk/blob_bdev.h 00:04:27.257 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:27.257 TEST_HEADER include/spdk/blobfs.h 00:04:27.257 TEST_HEADER include/spdk/blob.h 00:04:27.257 TEST_HEADER include/spdk/conf.h 00:04:27.257 TEST_HEADER include/spdk/config.h 00:04:27.257 TEST_HEADER include/spdk/cpuset.h 00:04:27.257 TEST_HEADER include/spdk/crc16.h 00:04:27.257 TEST_HEADER include/spdk/crc32.h 00:04:27.257 TEST_HEADER include/spdk/crc64.h 00:04:27.257 TEST_HEADER include/spdk/dif.h 00:04:27.257 TEST_HEADER include/spdk/dma.h 00:04:27.257 TEST_HEADER include/spdk/endian.h 00:04:27.257 TEST_HEADER include/spdk/env_dpdk.h 00:04:27.257 TEST_HEADER include/spdk/env.h 00:04:27.257 TEST_HEADER include/spdk/event.h 00:04:27.257 TEST_HEADER include/spdk/fd_group.h 00:04:27.257 TEST_HEADER include/spdk/fd.h 00:04:27.257 TEST_HEADER include/spdk/file.h 00:04:27.257 TEST_HEADER include/spdk/fsdev.h 00:04:27.257 TEST_HEADER include/spdk/fsdev_module.h 00:04:27.257 TEST_HEADER include/spdk/ftl.h 00:04:27.257 LINK spdk_nvme_identify 00:04:27.257 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:27.257 TEST_HEADER include/spdk/gpt_spec.h 00:04:27.257 TEST_HEADER include/spdk/hexlify.h 00:04:27.257 TEST_HEADER include/spdk/histogram_data.h 00:04:27.257 TEST_HEADER include/spdk/idxd.h 00:04:27.257 TEST_HEADER include/spdk/idxd_spec.h 00:04:27.257 TEST_HEADER include/spdk/init.h 00:04:27.257 TEST_HEADER include/spdk/ioat.h 00:04:27.257 TEST_HEADER include/spdk/ioat_spec.h 00:04:27.257 TEST_HEADER include/spdk/iscsi_spec.h 00:04:27.257 TEST_HEADER include/spdk/json.h 00:04:27.257 TEST_HEADER include/spdk/jsonrpc.h 00:04:27.257 TEST_HEADER include/spdk/keyring.h 00:04:27.257 TEST_HEADER include/spdk/keyring_module.h 00:04:27.257 TEST_HEADER include/spdk/likely.h 00:04:27.257 TEST_HEADER include/spdk/log.h 00:04:27.257 TEST_HEADER include/spdk/lvol.h 00:04:27.257 TEST_HEADER include/spdk/md5.h 00:04:27.257 TEST_HEADER include/spdk/memory.h 00:04:27.257 TEST_HEADER include/spdk/mmio.h 00:04:27.257 TEST_HEADER include/spdk/nbd.h 00:04:27.257 TEST_HEADER include/spdk/net.h 00:04:27.257 TEST_HEADER include/spdk/notify.h 00:04:27.257 TEST_HEADER include/spdk/nvme.h 00:04:27.257 TEST_HEADER include/spdk/nvme_intel.h 00:04:27.257 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:27.257 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:27.257 TEST_HEADER include/spdk/nvme_spec.h 00:04:27.257 TEST_HEADER include/spdk/nvme_zns.h 00:04:27.257 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:27.257 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:27.257 TEST_HEADER include/spdk/nvmf.h 00:04:27.257 LINK histogram_perf 00:04:27.257 TEST_HEADER include/spdk/nvmf_spec.h 00:04:27.257 TEST_HEADER include/spdk/nvmf_transport.h 00:04:27.257 TEST_HEADER include/spdk/opal.h 00:04:27.257 TEST_HEADER include/spdk/opal_spec.h 00:04:27.257 TEST_HEADER 
include/spdk/pci_ids.h 00:04:27.257 TEST_HEADER include/spdk/pipe.h 00:04:27.257 TEST_HEADER include/spdk/queue.h 00:04:27.257 TEST_HEADER include/spdk/reduce.h 00:04:27.257 TEST_HEADER include/spdk/rpc.h 00:04:27.257 TEST_HEADER include/spdk/scheduler.h 00:04:27.257 TEST_HEADER include/spdk/scsi.h 00:04:27.257 TEST_HEADER include/spdk/scsi_spec.h 00:04:27.257 TEST_HEADER include/spdk/sock.h 00:04:27.257 LINK jsoncat 00:04:27.257 TEST_HEADER include/spdk/stdinc.h 00:04:27.257 TEST_HEADER include/spdk/string.h 00:04:27.257 TEST_HEADER include/spdk/thread.h 00:04:27.257 TEST_HEADER include/spdk/trace.h 00:04:27.257 TEST_HEADER include/spdk/trace_parser.h 00:04:27.257 TEST_HEADER include/spdk/tree.h 00:04:27.257 TEST_HEADER include/spdk/ublk.h 00:04:27.257 TEST_HEADER include/spdk/util.h 00:04:27.258 TEST_HEADER include/spdk/uuid.h 00:04:27.258 TEST_HEADER include/spdk/version.h 00:04:27.258 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:27.258 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:27.258 TEST_HEADER include/spdk/vhost.h 00:04:27.258 TEST_HEADER include/spdk/vmd.h 00:04:27.258 TEST_HEADER include/spdk/xor.h 00:04:27.258 TEST_HEADER include/spdk/zipf.h 00:04:27.258 CXX test/cpp_headers/accel.o 00:04:27.258 CC examples/sock/hello_world/hello_sock.o 00:04:27.258 CC examples/thread/thread/thread_ex.o 00:04:27.258 LINK test_dma 00:04:27.258 CC app/spdk_dd/spdk_dd.o 00:04:27.516 CC test/env/mem_callbacks/mem_callbacks.o 00:04:27.516 LINK nvme_fuzz 00:04:27.516 CXX test/cpp_headers/accel_module.o 00:04:27.516 CXX test/cpp_headers/assert.o 00:04:27.516 CC app/vhost/vhost.o 00:04:27.517 CC app/fio/nvme/fio_plugin.o 00:04:27.517 LINK thread 00:04:27.517 LINK hello_sock 00:04:27.517 LINK mem_callbacks 00:04:27.517 CXX test/cpp_headers/barrier.o 00:04:27.517 LINK vhost 00:04:27.775 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:27.775 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:27.775 CC test/env/vtophys/vtophys.o 00:04:27.775 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:27.775 LINK spdk_dd 00:04:27.775 CXX test/cpp_headers/base64.o 00:04:27.775 CXX test/cpp_headers/bdev.o 00:04:27.775 LINK spdk_top 00:04:27.775 LINK vtophys 00:04:27.775 CC examples/vmd/lsvmd/lsvmd.o 00:04:27.775 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:28.033 CXX test/cpp_headers/bdev_module.o 00:04:28.033 CC test/env/memory/memory_ut.o 00:04:28.033 CXX test/cpp_headers/bdev_zone.o 00:04:28.033 LINK spdk_nvme 00:04:28.033 CXX test/cpp_headers/bit_array.o 00:04:28.033 CC test/env/pci/pci_ut.o 00:04:28.033 LINK lsvmd 00:04:28.033 LINK env_dpdk_post_init 00:04:28.033 CC app/fio/bdev/fio_plugin.o 00:04:28.033 CXX test/cpp_headers/bit_pool.o 00:04:28.033 CXX test/cpp_headers/blob_bdev.o 00:04:28.033 CC examples/vmd/led/led.o 00:04:28.033 CXX test/cpp_headers/blobfs_bdev.o 00:04:28.291 LINK vhost_fuzz 00:04:28.291 CC test/app/stub/stub.o 00:04:28.291 LINK led 00:04:28.291 CXX test/cpp_headers/blobfs.o 00:04:28.291 CXX test/cpp_headers/blob.o 00:04:28.291 LINK pci_ut 00:04:28.291 CXX test/cpp_headers/conf.o 00:04:28.291 LINK stub 00:04:28.291 CXX test/cpp_headers/config.o 00:04:28.550 CXX test/cpp_headers/cpuset.o 00:04:28.550 CC test/event/event_perf/event_perf.o 00:04:28.550 CC test/rpc_client/rpc_client_test.o 00:04:28.550 CC examples/idxd/perf/perf.o 00:04:28.551 LINK spdk_bdev 00:04:28.551 CC test/nvme/aer/aer.o 00:04:28.551 CC test/nvme/reset/reset.o 00:04:28.551 CC test/nvme/sgl/sgl.o 00:04:28.551 CXX test/cpp_headers/crc16.o 00:04:28.551 LINK event_perf 00:04:28.551 LINK rpc_client_test 00:04:28.551 
CC test/nvme/e2edp/nvme_dp.o 00:04:28.809 LINK memory_ut 00:04:28.809 CXX test/cpp_headers/crc32.o 00:04:28.809 LINK sgl 00:04:28.810 CC test/event/reactor/reactor.o 00:04:28.810 LINK reset 00:04:28.810 LINK idxd_perf 00:04:28.810 LINK aer 00:04:28.810 CC test/event/reactor_perf/reactor_perf.o 00:04:28.810 CXX test/cpp_headers/crc64.o 00:04:28.810 LINK nvme_dp 00:04:29.068 LINK reactor 00:04:29.068 LINK reactor_perf 00:04:29.068 CC test/event/app_repeat/app_repeat.o 00:04:29.068 CC test/nvme/overhead/overhead.o 00:04:29.068 CXX test/cpp_headers/dif.o 00:04:29.068 CC test/event/scheduler/scheduler.o 00:04:29.068 CC test/nvme/err_injection/err_injection.o 00:04:29.068 CXX test/cpp_headers/dma.o 00:04:29.068 CC examples/accel/perf/accel_perf.o 00:04:29.068 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:29.068 CC test/nvme/startup/startup.o 00:04:29.068 LINK app_repeat 00:04:29.068 CXX test/cpp_headers/endian.o 00:04:29.326 CC test/nvme/reserve/reserve.o 00:04:29.326 LINK err_injection 00:04:29.326 LINK scheduler 00:04:29.326 LINK overhead 00:04:29.326 LINK startup 00:04:29.326 CXX test/cpp_headers/env_dpdk.o 00:04:29.326 CC test/nvme/simple_copy/simple_copy.o 00:04:29.326 LINK hello_fsdev 00:04:29.326 CXX test/cpp_headers/env.o 00:04:29.326 CXX test/cpp_headers/event.o 00:04:29.326 CXX test/cpp_headers/fd_group.o 00:04:29.326 LINK reserve 00:04:29.326 LINK iscsi_fuzz 00:04:29.585 CXX test/cpp_headers/fd.o 00:04:29.585 LINK accel_perf 00:04:29.585 CXX test/cpp_headers/file.o 00:04:29.585 LINK simple_copy 00:04:29.585 CC examples/blob/cli/blobcli.o 00:04:29.585 CC examples/blob/hello_world/hello_blob.o 00:04:29.585 CC test/accel/dif/dif.o 00:04:29.585 CC test/nvme/boot_partition/boot_partition.o 00:04:29.585 CC test/nvme/connect_stress/connect_stress.o 00:04:29.585 CXX test/cpp_headers/fsdev.o 00:04:29.844 LINK connect_stress 00:04:29.844 CC examples/nvme/hello_world/hello_world.o 00:04:29.844 LINK hello_blob 00:04:29.844 CC test/nvme/compliance/nvme_compliance.o 00:04:29.844 CC test/nvme/fused_ordering/fused_ordering.o 00:04:29.844 LINK boot_partition 00:04:29.844 CXX test/cpp_headers/fsdev_module.o 00:04:29.844 CC test/blobfs/mkfs/mkfs.o 00:04:29.844 LINK fused_ordering 00:04:29.844 LINK hello_world 00:04:30.102 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:30.102 CXX test/cpp_headers/ftl.o 00:04:30.102 CC test/nvme/fdp/fdp.o 00:04:30.102 LINK mkfs 00:04:30.102 CC examples/bdev/hello_world/hello_bdev.o 00:04:30.102 LINK blobcli 00:04:30.102 CC test/nvme/cuse/cuse.o 00:04:30.102 LINK nvme_compliance 00:04:30.102 LINK doorbell_aers 00:04:30.102 CXX test/cpp_headers/fuse_dispatcher.o 00:04:30.102 CC examples/nvme/reconnect/reconnect.o 00:04:30.360 LINK hello_bdev 00:04:30.360 CXX test/cpp_headers/gpt_spec.o 00:04:30.360 LINK dif 00:04:30.360 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:30.360 CC examples/nvme/arbitration/arbitration.o 00:04:30.360 LINK fdp 00:04:30.360 CC examples/nvme/hotplug/hotplug.o 00:04:30.360 CXX test/cpp_headers/hexlify.o 00:04:30.360 CXX test/cpp_headers/histogram_data.o 00:04:30.360 CC test/lvol/esnap/esnap.o 00:04:30.618 CXX test/cpp_headers/idxd.o 00:04:30.618 LINK hotplug 00:04:30.618 LINK reconnect 00:04:30.618 CC examples/bdev/bdevperf/bdevperf.o 00:04:30.618 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:30.618 CXX test/cpp_headers/idxd_spec.o 00:04:30.618 LINK arbitration 00:04:30.618 CXX test/cpp_headers/init.o 00:04:30.618 LINK nvme_manage 00:04:30.618 CC examples/nvme/abort/abort.o 00:04:30.618 LINK cmb_copy 00:04:30.876 CXX test/cpp_headers/ioat.o 
00:04:30.876 CXX test/cpp_headers/ioat_spec.o 00:04:30.876 CC test/bdev/bdevio/bdevio.o 00:04:30.876 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:30.876 CXX test/cpp_headers/iscsi_spec.o 00:04:30.876 CXX test/cpp_headers/json.o 00:04:30.876 CXX test/cpp_headers/jsonrpc.o 00:04:30.876 CXX test/cpp_headers/keyring.o 00:04:30.876 CXX test/cpp_headers/keyring_module.o 00:04:30.876 LINK pmr_persistence 00:04:30.876 CXX test/cpp_headers/likely.o 00:04:31.136 CXX test/cpp_headers/log.o 00:04:31.136 LINK abort 00:04:31.136 LINK cuse 00:04:31.136 CXX test/cpp_headers/lvol.o 00:04:31.136 CXX test/cpp_headers/md5.o 00:04:31.136 CXX test/cpp_headers/memory.o 00:04:31.136 CXX test/cpp_headers/mmio.o 00:04:31.136 CXX test/cpp_headers/nbd.o 00:04:31.136 CXX test/cpp_headers/net.o 00:04:31.136 CXX test/cpp_headers/notify.o 00:04:31.136 CXX test/cpp_headers/nvme.o 00:04:31.136 CXX test/cpp_headers/nvme_intel.o 00:04:31.136 LINK bdevio 00:04:31.136 CXX test/cpp_headers/nvme_ocssd.o 00:04:31.136 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:31.136 CXX test/cpp_headers/nvme_spec.o 00:04:31.394 CXX test/cpp_headers/nvme_zns.o 00:04:31.394 LINK bdevperf 00:04:31.394 CXX test/cpp_headers/nvmf_cmd.o 00:04:31.394 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:31.394 CXX test/cpp_headers/nvmf.o 00:04:31.394 CXX test/cpp_headers/nvmf_spec.o 00:04:31.394 CXX test/cpp_headers/nvmf_transport.o 00:04:31.394 CXX test/cpp_headers/opal.o 00:04:31.394 CXX test/cpp_headers/opal_spec.o 00:04:31.394 CXX test/cpp_headers/pci_ids.o 00:04:31.394 CXX test/cpp_headers/pipe.o 00:04:31.394 CXX test/cpp_headers/queue.o 00:04:31.394 CXX test/cpp_headers/reduce.o 00:04:31.394 CXX test/cpp_headers/rpc.o 00:04:31.394 CXX test/cpp_headers/scheduler.o 00:04:31.394 CXX test/cpp_headers/scsi.o 00:04:31.394 CXX test/cpp_headers/scsi_spec.o 00:04:31.653 CXX test/cpp_headers/sock.o 00:04:31.653 CXX test/cpp_headers/stdinc.o 00:04:31.653 CXX test/cpp_headers/string.o 00:04:31.653 CC examples/nvmf/nvmf/nvmf.o 00:04:31.653 CXX test/cpp_headers/thread.o 00:04:31.653 CXX test/cpp_headers/trace.o 00:04:31.653 CXX test/cpp_headers/trace_parser.o 00:04:31.653 CXX test/cpp_headers/tree.o 00:04:31.653 CXX test/cpp_headers/ublk.o 00:04:31.653 CXX test/cpp_headers/util.o 00:04:31.653 CXX test/cpp_headers/uuid.o 00:04:31.653 CXX test/cpp_headers/version.o 00:04:31.653 CXX test/cpp_headers/vfio_user_pci.o 00:04:31.653 CXX test/cpp_headers/vfio_user_spec.o 00:04:31.653 CXX test/cpp_headers/vhost.o 00:04:31.653 CXX test/cpp_headers/vmd.o 00:04:31.653 CXX test/cpp_headers/xor.o 00:04:31.653 CXX test/cpp_headers/zipf.o 00:04:31.911 LINK nvmf 00:04:35.195 LINK esnap 00:04:35.195 00:04:35.195 real 1m2.611s 00:04:35.195 user 5m12.484s 00:04:35.195 sys 0m50.842s 00:04:35.195 05:57:00 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:35.195 05:57:00 make -- common/autotest_common.sh@10 -- $ set +x 00:04:35.195 ************************************ 00:04:35.195 END TEST make 00:04:35.195 ************************************ 00:04:35.195 05:57:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:35.195 05:57:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:35.195 05:57:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:35.195 05:57:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.195 05:57:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:35.195 05:57:00 -- pm/common@44 -- $ pid=5789 00:04:35.195 05:57:00 -- pm/common@50 -- $ kill -TERM 
5789 00:04:35.195 05:57:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.195 05:57:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:35.195 05:57:00 -- pm/common@44 -- $ pid=5790 00:04:35.195 05:57:00 -- pm/common@50 -- $ kill -TERM 5790 00:04:35.195 05:57:00 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.195 05:57:00 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.195 05:57:00 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.195 05:57:00 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.195 05:57:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.195 05:57:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.195 05:57:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.195 05:57:00 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.195 05:57:00 -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.195 05:57:00 -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.195 05:57:00 -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.195 05:57:00 -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.195 05:57:00 -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.195 05:57:00 -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.195 05:57:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.195 05:57:00 -- scripts/common.sh@344 -- # case "$op" in 00:04:35.195 05:57:00 -- scripts/common.sh@345 -- # : 1 00:04:35.195 05:57:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.195 05:57:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.195 05:57:00 -- scripts/common.sh@365 -- # decimal 1 00:04:35.195 05:57:00 -- scripts/common.sh@353 -- # local d=1 00:04:35.195 05:57:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.195 05:57:00 -- scripts/common.sh@355 -- # echo 1 00:04:35.195 05:57:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.195 05:57:00 -- scripts/common.sh@366 -- # decimal 2 00:04:35.195 05:57:00 -- scripts/common.sh@353 -- # local d=2 00:04:35.195 05:57:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.195 05:57:00 -- scripts/common.sh@355 -- # echo 2 00:04:35.195 05:57:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.195 05:57:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.195 05:57:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.195 05:57:00 -- scripts/common.sh@368 -- # return 0 00:04:35.195 05:57:00 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.195 05:57:00 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.195 --rc genhtml_branch_coverage=1 00:04:35.195 --rc genhtml_function_coverage=1 00:04:35.195 --rc genhtml_legend=1 00:04:35.195 --rc geninfo_all_blocks=1 00:04:35.195 --rc geninfo_unexecuted_blocks=1 00:04:35.195 00:04:35.195 ' 00:04:35.195 05:57:00 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.195 --rc genhtml_branch_coverage=1 00:04:35.195 --rc genhtml_function_coverage=1 00:04:35.195 --rc genhtml_legend=1 00:04:35.195 --rc geninfo_all_blocks=1 00:04:35.195 --rc geninfo_unexecuted_blocks=1 00:04:35.195 00:04:35.195 ' 00:04:35.195 05:57:00 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:35.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.195 --rc 
genhtml_branch_coverage=1 00:04:35.195 --rc genhtml_function_coverage=1 00:04:35.195 --rc genhtml_legend=1 00:04:35.195 --rc geninfo_all_blocks=1 00:04:35.195 --rc geninfo_unexecuted_blocks=1 00:04:35.195 00:04:35.195 ' 00:04:35.195 05:57:00 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.195 --rc genhtml_branch_coverage=1 00:04:35.195 --rc genhtml_function_coverage=1 00:04:35.195 --rc genhtml_legend=1 00:04:35.195 --rc geninfo_all_blocks=1 00:04:35.195 --rc geninfo_unexecuted_blocks=1 00:04:35.195 00:04:35.195 ' 00:04:35.195 05:57:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.195 05:57:00 -- nvmf/common.sh@7 -- # uname -s 00:04:35.195 05:57:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.195 05:57:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.195 05:57:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.195 05:57:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.195 05:57:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.195 05:57:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.195 05:57:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.195 05:57:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.195 05:57:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.195 05:57:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.195 05:57:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c804c1ef-9fa5-4197-9d9f-38b72f371f25 00:04:35.195 05:57:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=c804c1ef-9fa5-4197-9d9f-38b72f371f25 00:04:35.195 05:57:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.195 05:57:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.195 05:57:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.195 05:57:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.195 05:57:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.195 05:57:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.195 05:57:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.195 05:57:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.195 05:57:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.195 05:57:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.195 05:57:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.196 05:57:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.196 05:57:00 -- paths/export.sh@5 -- # export PATH 00:04:35.196 05:57:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.196 05:57:00 -- nvmf/common.sh@51 -- # : 0 00:04:35.196 05:57:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.196 05:57:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.196 05:57:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.196 05:57:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.196 05:57:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.196 05:57:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.196 05:57:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.196 05:57:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.196 05:57:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.196 05:57:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:35.196 05:57:00 -- spdk/autotest.sh@32 -- # uname -s 00:04:35.196 05:57:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:35.196 05:57:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:35.196 05:57:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.196 05:57:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:35.196 05:57:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.196 05:57:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:35.196 05:57:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:35.196 05:57:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:35.196 05:57:00 -- spdk/autotest.sh@48 -- # udevadm_pid=66609 00:04:35.196 05:57:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:35.196 05:57:00 -- pm/common@17 -- # local monitor 00:04:35.196 05:57:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.196 05:57:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.196 05:57:00 -- pm/common@25 -- # sleep 1 00:04:35.196 05:57:00 -- pm/common@21 -- # date +%s 00:04:35.196 05:57:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:35.196 05:57:00 -- pm/common@21 -- # date +%s 00:04:35.196 05:57:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727762220 00:04:35.196 05:57:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727762220 00:04:35.196 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727762220_collect-cpu-load.pm.log 00:04:35.196 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727762220_collect-vmstat.pm.log 00:04:36.567 05:57:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:36.567 05:57:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:36.567 05:57:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.567 05:57:01 -- common/autotest_common.sh@10 -- # set +x 00:04:36.567 05:57:01 -- spdk/autotest.sh@59 -- # create_test_list 
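The autotest prologue above saves the kernel's existing core_pattern and swaps in SPDK's core-collector script. The xtrace shows the echo but not its redirection target, so writing to /proc/sys/kernel/core_pattern is an assumption here based on the standard kernel interface (root required); a minimal sketch:

    rootdir=/home/vagrant/spdk_repo/spdk                     # path from the trace
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)    # saved for restore on exit
    mkdir -p "$rootdir/../output/coredumps"
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern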
00:04:36.567 05:57:01 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:36.567 05:57:01 -- common/autotest_common.sh@10 -- # set +x 00:04:36.567 05:57:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:36.567 05:57:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:36.567 05:57:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:36.567 05:57:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:36.567 05:57:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:36.567 05:57:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:36.567 05:57:01 -- common/autotest_common.sh@1455 -- # uname 00:04:36.567 05:57:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:36.567 05:57:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:36.567 05:57:01 -- common/autotest_common.sh@1475 -- # uname 00:04:36.567 05:57:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:36.567 05:57:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:36.567 05:57:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:36.567 lcov: LCOV version 1.15 00:04:36.567 05:57:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:48.768 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:48.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:03.662 05:57:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:03.662 05:57:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.662 05:57:28 -- common/autotest_common.sh@10 -- # set +x 00:05:03.662 05:57:28 -- spdk/autotest.sh@78 -- # rm -f 00:05:03.662 05:57:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.662 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:03.662 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:03.662 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:03.662 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:03.662 05:57:29 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:03.662 05:57:29 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:03.662 05:57:29 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:03.662 05:57:29 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:03.662 05:57:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:03.662 05:57:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:03.662 05:57:29 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:03.662 05:57:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:03.662 05:57:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:03.662 05:57:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:03.662 05:57:29 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:03.662 05:57:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:03.662 05:57:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:03.662 05:57:29 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:03.662 05:57:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:03.662 05:57:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:03.662 05:57:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:03.662 05:57:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:03.662 05:57:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:03.662 05:57:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:03.662 05:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.662 05:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:03.662 05:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:03.662 05:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:03.662 05:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:03.923 No valid GPT data, bailing 00:05:03.923 05:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:03.923 05:57:29 -- scripts/common.sh@394 -- # pt= 00:05:03.923 05:57:29 -- scripts/common.sh@395 -- # return 1 00:05:03.924 05:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:03.924 1+0 records in 00:05:03.924 1+0 records out 00:05:03.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288155 s, 36.4 MB/s 
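The pre_cleanup trace above boils down to one loop: skip zoned namespaces, treat any namespace for which blkid reports no partition-table type as unused, and zero its first MiB. A condensed, destructive sketch of that logic (run only on disposable test disks; the harness additionally consults spdk-gpt.py, which is omitted here):

    shopt -s extglob                                  # for the !(*p*) glob used in the trace
    for dev in /dev/nvme*n!(*p*); do                  # namespaces, not partitions
        name=${dev##*/}
        [[ $(cat /sys/block/$name/queue/zoned 2>/dev/null) == none ]] || continue
        pt=$(blkid -s PTTYPE -o value "$dev")         # empty output => no partition table
        [[ -z $pt ]] && dd if=/dev/zero of="$dev" bs=1M count=1
    done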
00:05:03.924 05:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.924 05:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:03.924 05:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:03.924 05:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:03.924 05:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:03.924 No valid GPT data, bailing 00:05:03.924 05:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:03.924 05:57:29 -- scripts/common.sh@394 -- # pt= 00:05:03.924 05:57:29 -- scripts/common.sh@395 -- # return 1 00:05:03.924 05:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:03.924 1+0 records in 00:05:03.924 1+0 records out 00:05:03.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443946 s, 236 MB/s 00:05:03.924 05:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.924 05:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:03.924 05:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:03.924 05:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:03.924 05:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:03.924 No valid GPT data, bailing 00:05:03.924 05:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:04.185 05:57:29 -- scripts/common.sh@394 -- # pt= 00:05:04.185 05:57:29 -- scripts/common.sh@395 -- # return 1 00:05:04.185 05:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:04.185 1+0 records in 00:05:04.185 1+0 records out 00:05:04.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396237 s, 265 MB/s 00:05:04.185 05:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.185 05:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.185 05:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:04.185 05:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:04.185 05:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:04.185 No valid GPT data, bailing 00:05:04.185 05:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:04.185 05:57:29 -- scripts/common.sh@394 -- # pt= 00:05:04.185 05:57:29 -- scripts/common.sh@395 -- # return 1 00:05:04.185 05:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:04.185 1+0 records in 00:05:04.185 1+0 records out 00:05:04.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459613 s, 228 MB/s 00:05:04.185 05:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.185 05:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.185 05:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:04.185 05:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:04.185 05:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:04.185 No valid GPT data, bailing 00:05:04.185 05:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:04.185 05:57:29 -- scripts/common.sh@394 -- # pt= 00:05:04.185 05:57:29 -- scripts/common.sh@395 -- # return 1 00:05:04.185 05:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:04.185 1+0 records in 00:05:04.185 1+0 records out 00:05:04.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479514 s, 219 
MB/s 00:05:04.185 05:57:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.185 05:57:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.185 05:57:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:04.185 05:57:29 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:04.185 05:57:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:04.185 No valid GPT data, bailing 00:05:04.185 05:57:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:04.185 05:57:29 -- scripts/common.sh@394 -- # pt= 00:05:04.185 05:57:29 -- scripts/common.sh@395 -- # return 1 00:05:04.185 05:57:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:04.185 1+0 records in 00:05:04.185 1+0 records out 00:05:04.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361491 s, 290 MB/s 00:05:04.185 05:57:29 -- spdk/autotest.sh@105 -- # sync 00:05:04.446 05:57:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:04.446 05:57:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:04.446 05:57:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:05.833 05:57:31 -- spdk/autotest.sh@111 -- # uname -s 00:05:05.833 05:57:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:05.833 05:57:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:05.833 05:57:31 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:06.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.665 Hugepages 00:05:06.665 node hugesize free / total 00:05:06.665 node0 1048576kB 0 / 0 00:05:06.665 node0 2048kB 0 / 0 00:05:06.665 00:05:06.665 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:06.926 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:06.926 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:06.926 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:06.926 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:07.191 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:07.191 05:57:32 -- spdk/autotest.sh@117 -- # uname -s 00:05:07.191 05:57:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:07.191 05:57:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:07.191 05:57:32 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.027 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.027 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.027 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.027 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.288 05:57:33 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:09.230 05:57:34 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:09.230 05:57:34 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:09.231 05:57:34 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.231 05:57:34 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:09.231 05:57:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:09.231 05:57:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:09.231 05:57:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
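The get_nvme_bdfs helper, whose trace continues below, is a one-liner at heart: gen_nvme.sh emits an SPDK JSON config covering every attached controller, and jq pulls out each PCI address. A standalone sketch, with paths taken from the trace:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"                        # e.g. 0000:00:10.0 ... 0000:00:13.0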
00:05:09.231 05:57:34 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.231 05:57:34 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:09.231 05:57:34 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:09.231 05:57:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:09.231 05:57:34 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.754 Waiting for block devices as requested 00:05:09.754 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.754 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.754 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.014 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.306 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:15.306 05:57:40 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:15.306 05:57:40 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:15.306 05:57:40 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:15.306 05:57:40 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:15.306 05:57:40 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:15.306 05:57:40 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:15.306 05:57:40 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:15.306 05:57:40 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:15.306 05:57:40 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1541 -- # continue 00:05:15.306 05:57:40 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:15.306 05:57:40 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
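Each per-controller check traced above reads two fields from nvme id-ctrl: OACS, whose bit 3 indicates namespace-management support, and UNVMCAP, the unallocated NVM capacity. In the log, oacs=0x12a gives 0x12a & 0x8 = 8, and unvmcap=0 means there is nothing to reclaim, so the loop continues. The same probe as a standalone sketch (device name from the trace):

    ctrlr=/dev/nvme1
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then                  # bit 3: namespace management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrlr: no unallocated NVM capacity"
    fi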
00:05:15.306 05:57:40 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:15.306 05:57:40 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:15.306 05:57:40 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:15.306 05:57:40 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:15.306 05:57:40 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:15.306 05:57:40 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:15.306 05:57:40 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1541 -- # continue 00:05:15.306 05:57:40 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:15.306 05:57:40 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:15.306 05:57:40 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:15.306 05:57:40 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:15.306 05:57:40 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1541 -- # continue 00:05:15.306 05:57:40 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:15.306 05:57:40 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:15.306 05:57:40 -- 
common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:15.306 05:57:40 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:15.306 05:57:40 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:15.306 05:57:40 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:15.306 05:57:40 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:15.306 05:57:40 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:15.306 05:57:40 -- common/autotest_common.sh@1541 -- # continue 00:05:15.306 05:57:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:15.306 05:57:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.306 05:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 05:57:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:15.306 05:57:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.306 05:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.306 05:57:40 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.138 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.138 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.138 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.138 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.138 05:57:41 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:16.138 05:57:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.138 05:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:16.400 05:57:41 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:16.400 05:57:41 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:16.400 05:57:41 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:16.400 05:57:41 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:16.400 05:57:41 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:16.400 05:57:41 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:16.400 05:57:41 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:16.400 05:57:41 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:16.400 05:57:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:16.400 
05:57:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:16.400 05:57:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.400 05:57:41 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:16.400 05:57:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:16.400 05:57:41 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:16.400 05:57:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:16.400 05:57:41 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:16.400 05:57:41 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:16.400 05:57:41 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:16.400 05:57:41 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:16.400 05:57:41 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:16.400 05:57:41 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:16.400 05:57:41 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:16.400 05:57:41 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:16.400 05:57:41 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:16.400 05:57:41 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:16.400 05:57:41 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:16.400 05:57:41 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:16.400 05:57:41 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:16.400 05:57:41 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:16.400 05:57:41 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:16.400 05:57:41 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:16.400 05:57:41 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:16.400 05:57:41 -- common/autotest_common.sh@1570 -- # return 0 00:05:16.400 05:57:41 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:16.400 05:57:41 -- common/autotest_common.sh@1578 -- # return 0 00:05:16.400 05:57:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:16.400 05:57:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:16.400 05:57:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:16.400 05:57:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:16.400 05:57:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:16.400 05:57:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.400 05:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:16.400 05:57:41 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:16.400 05:57:41 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:16.400 05:57:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.400 05:57:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.400 05:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:16.400 ************************************ 00:05:16.400 START TEST env 00:05:16.400 ************************************ 00:05:16.400 05:57:41 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:16.400 * Looking for test storage... 
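The opal_revert_cleanup guard above only triggers for one specific controller model: it collects every NVMe transport address that gen_nvme.sh emits and keeps the ones whose PCI device ID is 0x0a54. A sketch of that filter; the QEMU controllers in this job report 0x0010, so the filtered list stays empty and the cleanup is skipped:

    # Enumerate NVMe PCI addresses from the generated SPDK config.
    mapfile -t _bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    bdfs=()
    for bdf in "${_bdfs[@]}"; do
      # Each PCI function exposes its device ID in sysfs (0x0010 here).
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} > 0 )) || return 0   # no matching drives, skip OPAL revert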
00:05:16.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:16.400 05:57:41 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.400 05:57:41 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.400 05:57:41 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.400 05:57:42 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.400 05:57:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.400 05:57:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.400 05:57:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.400 05:57:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.400 05:57:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.401 05:57:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.401 05:57:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.401 05:57:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.401 05:57:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.401 05:57:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.401 05:57:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.401 05:57:42 env -- scripts/common.sh@344 -- # case "$op" in 00:05:16.401 05:57:42 env -- scripts/common.sh@345 -- # : 1 00:05:16.401 05:57:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.401 05:57:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.401 05:57:42 env -- scripts/common.sh@365 -- # decimal 1 00:05:16.401 05:57:42 env -- scripts/common.sh@353 -- # local d=1 00:05:16.401 05:57:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.401 05:57:42 env -- scripts/common.sh@355 -- # echo 1 00:05:16.663 05:57:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.663 05:57:42 env -- scripts/common.sh@366 -- # decimal 2 00:05:16.663 05:57:42 env -- scripts/common.sh@353 -- # local d=2 00:05:16.663 05:57:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.663 05:57:42 env -- scripts/common.sh@355 -- # echo 2 00:05:16.663 05:57:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.663 05:57:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.663 05:57:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.663 05:57:42 env -- scripts/common.sh@368 -- # return 0 00:05:16.663 05:57:42 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.663 05:57:42 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.663 --rc genhtml_branch_coverage=1 00:05:16.663 --rc genhtml_function_coverage=1 00:05:16.663 --rc genhtml_legend=1 00:05:16.663 --rc geninfo_all_blocks=1 00:05:16.663 --rc geninfo_unexecuted_blocks=1 00:05:16.663 00:05:16.663 ' 00:05:16.663 05:57:42 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.663 --rc genhtml_branch_coverage=1 00:05:16.663 --rc genhtml_function_coverage=1 00:05:16.663 --rc genhtml_legend=1 00:05:16.663 --rc geninfo_all_blocks=1 00:05:16.663 --rc geninfo_unexecuted_blocks=1 00:05:16.663 00:05:16.663 ' 00:05:16.663 05:57:42 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.663 --rc genhtml_branch_coverage=1 00:05:16.663 --rc genhtml_function_coverage=1 00:05:16.663 --rc 
genhtml_legend=1 00:05:16.663 --rc geninfo_all_blocks=1 00:05:16.663 --rc geninfo_unexecuted_blocks=1 00:05:16.663 00:05:16.663 ' 00:05:16.663 05:57:42 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.663 --rc genhtml_branch_coverage=1 00:05:16.663 --rc genhtml_function_coverage=1 00:05:16.663 --rc genhtml_legend=1 00:05:16.663 --rc geninfo_all_blocks=1 00:05:16.663 --rc geninfo_unexecuted_blocks=1 00:05:16.663 00:05:16.663 ' 00:05:16.663 05:57:42 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:16.663 05:57:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.663 05:57:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.663 05:57:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.663 ************************************ 00:05:16.663 START TEST env_memory 00:05:16.663 ************************************ 00:05:16.663 05:57:42 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:16.663 00:05:16.663 00:05:16.663 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.663 http://cunit.sourceforge.net/ 00:05:16.663 00:05:16.663 00:05:16.663 Suite: memory 00:05:16.663 Test: alloc and free memory map ...[2024-10-01 05:57:42.079841] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:16.663 passed 00:05:16.663 Test: mem map translation ...[2024-10-01 05:57:42.118830] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:16.663 [2024-10-01 05:57:42.118869] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:16.663 [2024-10-01 05:57:42.118926] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:16.663 [2024-10-01 05:57:42.118938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:16.663 passed 00:05:16.663 Test: mem map registration ...[2024-10-01 05:57:42.189573] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:16.663 [2024-10-01 05:57:42.189608] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:16.663 passed 00:05:16.925 Test: mem map adjacent registrations ...passed 00:05:16.925 00:05:16.925 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.925 suites 1 1 n/a 0 0 00:05:16.925 tests 4 4 4 0 0 00:05:16.925 asserts 152 152 152 0 n/a 00:05:16.925 00:05:16.925 Elapsed time = 0.237 seconds 00:05:16.925 00:05:16.925 real 0m0.270s 00:05:16.925 user 0m0.242s 00:05:16.925 sys 0m0.023s 00:05:16.925 05:57:42 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.925 05:57:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:16.925 ************************************ 00:05:16.925 END TEST env_memory 00:05:16.925 ************************************ 00:05:16.925 05:57:42 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:16.925 05:57:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.925 05:57:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.925 05:57:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.925 ************************************ 00:05:16.925 START TEST env_vtophys 00:05:16.925 ************************************ 00:05:16.925 05:57:42 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:16.925 EAL: lib.eal log level changed from notice to debug 00:05:16.925 EAL: Detected lcore 0 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 1 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 2 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 3 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 4 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 5 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 6 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 7 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 8 as core 0 on socket 0 00:05:16.925 EAL: Detected lcore 9 as core 0 on socket 0 00:05:16.925 EAL: Maximum logical cores by configuration: 128 00:05:16.925 EAL: Detected CPU lcores: 10 00:05:16.925 EAL: Detected NUMA nodes: 1 00:05:16.925 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:16.925 EAL: Detected shared linkage of DPDK 00:05:16.926 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:16.926 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:16.926 EAL: Registered [vdev] bus. 00:05:16.926 EAL: bus.vdev log level changed from disabled to notice 00:05:16.926 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:16.926 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:16.926 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:16.926 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:16.926 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:16.926 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:16.926 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:16.926 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:16.926 EAL: No shared files mode enabled, IPC will be disabled 00:05:16.926 EAL: No shared files mode enabled, IPC is disabled 00:05:16.926 EAL: Selected IOVA mode 'PA' 00:05:16.926 EAL: Probing VFIO support... 00:05:16.926 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:16.926 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:16.926 EAL: Ask a virtual area of 0x2e000 bytes 00:05:16.926 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:16.926 EAL: Setting up physically contiguous memory... 
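EAL ends up in IOVA mode 'PA' above because the VFIO probe fails: per the messages it looks for /sys/module/vfio (and vfio_pci) and finds neither, so DMA goes through the uio path with physical addresses. The same availability check from the shell, as a sketch:

    # With vfio/vfio_pci loaded, EAL can use IOMMU-backed, IOVA-as-VA DMA;
    # without them it falls back to physical addressing, as in this run.
    if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
      echo 'vfio available'
    else
      echo "VFIO modules not loaded, expect IOVA mode 'PA'"
    fi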
00:05:16.926 EAL: Setting maximum number of open files to 524288 00:05:16.926 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:16.926 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:16.926 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.926 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:16.926 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.926 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.926 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:16.926 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:16.926 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.926 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:16.926 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.926 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.926 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:16.926 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:16.926 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.926 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:16.926 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.926 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.926 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:16.926 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:16.926 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.926 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:16.926 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.926 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.926 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:16.926 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:16.926 EAL: Hugepages will be freed exactly as allocated. 00:05:16.926 EAL: No shared files mode enabled, IPC is disabled 00:05:16.926 EAL: No shared files mode enabled, IPC is disabled 00:05:16.926 EAL: TSC frequency is ~2600000 KHz 00:05:16.926 EAL: Main lcore 0 is ready (tid=7ff83906ea40;cpuset=[0]) 00:05:16.926 EAL: Trying to obtain current memory policy. 00:05:16.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.926 EAL: Restoring previous memory policy: 0 00:05:16.926 EAL: request: mp_malloc_sync 00:05:16.926 EAL: No shared files mode enabled, IPC is disabled 00:05:16.926 EAL: Heap on socket 0 was expanded by 2MB 00:05:16.926 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:16.926 EAL: No shared files mode enabled, IPC is disabled 00:05:16.926 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:16.926 EAL: Mem event callback 'spdk:(nil)' registered 00:05:16.926 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:16.926 00:05:16.926 00:05:16.926 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.926 http://cunit.sourceforge.net/ 00:05:16.926 00:05:16.926 00:05:16.926 Suite: components_suite 00:05:17.188 Test: vtophys_malloc_test ...passed 00:05:17.188 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
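The virtual-area sizes above follow directly from the memseg parameters: each of the 4 segment lists covers n_segs:8192 pages of hugepage_sz:2097152 bytes, which is exactly the 0x400000000-byte reservation requested per list (the extra 0x61000 ask is the list header). Checking the arithmetic:

    # 8192 two-megabyte hugepages per memseg list:
    printf '0x%x\n' $((8192 * 2097152))   # -> 0x400000000 (16 GiB) per list
    # Four lists therefore reserve 64 GiB of address space up front,
    # to be backed by hugepages only as the heap actually grows.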
00:05:17.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.188 EAL: Restoring previous memory policy: 4 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was expanded by 4MB 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was shrunk by 4MB 00:05:17.188 EAL: Trying to obtain current memory policy. 00:05:17.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.188 EAL: Restoring previous memory policy: 4 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was expanded by 6MB 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was shrunk by 6MB 00:05:17.188 EAL: Trying to obtain current memory policy. 00:05:17.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.188 EAL: Restoring previous memory policy: 4 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was expanded by 10MB 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was shrunk by 10MB 00:05:17.188 EAL: Trying to obtain current memory policy. 00:05:17.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.188 EAL: Restoring previous memory policy: 4 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was expanded by 18MB 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was shrunk by 18MB 00:05:17.188 EAL: Trying to obtain current memory policy. 00:05:17.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.188 EAL: Restoring previous memory policy: 4 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was expanded by 34MB 00:05:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.188 EAL: request: mp_malloc_sync 00:05:17.188 EAL: No shared files mode enabled, IPC is disabled 00:05:17.188 EAL: Heap on socket 0 was shrunk by 34MB 00:05:17.188 EAL: Trying to obtain current memory policy. 
00:05:17.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.449 EAL: Restoring previous memory policy: 4 00:05:17.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.449 EAL: request: mp_malloc_sync 00:05:17.449 EAL: No shared files mode enabled, IPC is disabled 00:05:17.449 EAL: Heap on socket 0 was expanded by 66MB 00:05:17.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.449 EAL: request: mp_malloc_sync 00:05:17.449 EAL: No shared files mode enabled, IPC is disabled 00:05:17.449 EAL: Heap on socket 0 was shrunk by 66MB 00:05:17.449 EAL: Trying to obtain current memory policy. 00:05:17.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.449 EAL: Restoring previous memory policy: 4 00:05:17.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.449 EAL: request: mp_malloc_sync 00:05:17.449 EAL: No shared files mode enabled, IPC is disabled 00:05:17.449 EAL: Heap on socket 0 was expanded by 130MB 00:05:17.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.449 EAL: request: mp_malloc_sync 00:05:17.449 EAL: No shared files mode enabled, IPC is disabled 00:05:17.449 EAL: Heap on socket 0 was shrunk by 130MB 00:05:17.449 EAL: Trying to obtain current memory policy. 00:05:17.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.449 EAL: Restoring previous memory policy: 4 00:05:17.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.449 EAL: request: mp_malloc_sync 00:05:17.449 EAL: No shared files mode enabled, IPC is disabled 00:05:17.449 EAL: Heap on socket 0 was expanded by 258MB 00:05:17.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.449 EAL: request: mp_malloc_sync 00:05:17.449 EAL: No shared files mode enabled, IPC is disabled 00:05:17.449 EAL: Heap on socket 0 was shrunk by 258MB 00:05:17.449 EAL: Trying to obtain current memory policy. 00:05:17.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.449 EAL: Restoring previous memory policy: 4 00:05:17.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.449 EAL: request: mp_malloc_sync 00:05:17.449 EAL: No shared files mode enabled, IPC is disabled 00:05:17.449 EAL: Heap on socket 0 was expanded by 514MB 00:05:17.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.710 EAL: request: mp_malloc_sync 00:05:17.710 EAL: No shared files mode enabled, IPC is disabled 00:05:17.710 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.710 EAL: Trying to obtain current memory policy. 
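The grow/shrink amounts in this ladder are not arbitrary: 4, 6, 10, 18, 34, 66, 130, 258, 514 and the 1026 MB round that follows all have the form 2^k + 2 MB, so each allocation lands just past a power-of-two boundary. Reproducing the sequence:

    # Heap expansion sizes seen in vtophys_spdk_malloc_test, in MB.
    for k in {1..10}; do printf '%d ' $((2**k + 2)); done; echo
    # -> 4 6 10 18 34 66 130 258 514 1026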
00:05:17.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.710 EAL: Restoring previous memory policy: 4 00:05:17.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.710 EAL: request: mp_malloc_sync 00:05:17.710 EAL: No shared files mode enabled, IPC is disabled 00:05:17.710 EAL: Heap on socket 0 was expanded by 1026MB 00:05:17.972 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.972 passed 00:05:17.972 00:05:17.972 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.972 suites 1 1 n/a 0 0 00:05:17.972 tests 2 2 2 0 0 00:05:17.972 asserts 5358 5358 5358 0 n/a 00:05:17.972 00:05:17.972 Elapsed time = 1.059 seconds 00:05:17.972 EAL: request: mp_malloc_sync 00:05:17.972 EAL: No shared files mode enabled, IPC is disabled 00:05:17.972 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:17.972 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.972 EAL: request: mp_malloc_sync 00:05:17.972 EAL: No shared files mode enabled, IPC is disabled 00:05:17.972 EAL: Heap on socket 0 was shrunk by 2MB 00:05:17.972 EAL: No shared files mode enabled, IPC is disabled 00:05:17.972 EAL: No shared files mode enabled, IPC is disabled 00:05:17.972 EAL: No shared files mode enabled, IPC is disabled 00:05:18.233 00:05:18.233 real 0m1.263s 00:05:18.233 user 0m0.542s 00:05:18.233 sys 0m0.587s 00:05:18.233 05:57:43 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.233 05:57:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:18.233 ************************************ 00:05:18.233 END TEST env_vtophys 00:05:18.233 ************************************ 00:05:18.233 05:57:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:18.233 05:57:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.233 05:57:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.233 05:57:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.233 ************************************ 00:05:18.233 START TEST env_pci 00:05:18.233 ************************************ 00:05:18.233 05:57:43 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:18.233 00:05:18.233 00:05:18.233 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.233 http://cunit.sourceforge.net/ 00:05:18.233 00:05:18.233 00:05:18.233 Suite: pci 00:05:18.233 Test: pci_hook ...[2024-10-01 05:57:43.653458] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69320 has claimed it 00:05:18.233 passed 00:05:18.233 00:05:18.233 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.234 suites 1 1 n/a 0 0 00:05:18.234 tests 1 1 1 0 0 00:05:18.234 asserts 25 25 25 0 n/a 00:05:18.234 00:05:18.234 Elapsed time = 0.005 seconds 00:05:18.234 EAL: Cannot find device (10000:00:01.0) 00:05:18.234 EAL: Failed to attach device on primary process 00:05:18.234 00:05:18.234 real 0m0.053s 00:05:18.234 user 0m0.027s 00:05:18.234 sys 0m0.026s 00:05:18.234 05:57:43 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.234 05:57:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:18.234 ************************************ 00:05:18.234 END TEST env_pci 00:05:18.234 ************************************ 00:05:18.234 05:57:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:18.234 05:57:43 env -- env/env.sh@15 -- # uname 00:05:18.234 05:57:43 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:18.234 05:57:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:18.234 05:57:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.234 05:57:43 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:18.234 05:57:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.234 05:57:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.234 ************************************ 00:05:18.234 START TEST env_dpdk_post_init 00:05:18.234 ************************************ 00:05:18.234 05:57:43 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.234 EAL: Detected CPU lcores: 10 00:05:18.234 EAL: Detected NUMA nodes: 1 00:05:18.234 EAL: Detected shared linkage of DPDK 00:05:18.234 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.234 EAL: Selected IOVA mode 'PA' 00:05:18.494 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.494 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:18.494 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:18.494 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:18.494 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:18.494 Starting DPDK initialization... 00:05:18.494 Starting SPDK post initialization... 00:05:18.494 SPDK NVMe probe 00:05:18.494 Attaching to 0000:00:10.0 00:05:18.494 Attaching to 0000:00:11.0 00:05:18.494 Attaching to 0000:00:12.0 00:05:18.494 Attaching to 0000:00:13.0 00:05:18.494 Attached to 0000:00:10.0 00:05:18.494 Attached to 0000:00:11.0 00:05:18.494 Attached to 0000:00:13.0 00:05:18.494 Attached to 0000:00:12.0 00:05:18.494 Cleaning up... 
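env_dpdk_post_init above runs with the argv assembled in env.sh: a one-core mask plus, on Linux, a pinned base virtual address, which keeps DPDK's memory layout predictable across processes. The invocation reconstructed from the argv trace (paths as in this job):

    # -c 0x1: run on core 0 only; --base-virtaddr pins where DPDK maps
    # its memory, so vtophys translations stay stable for secondary processes.
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000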
00:05:18.494 00:05:18.494 real 0m0.205s 00:05:18.494 user 0m0.046s 00:05:18.494 sys 0m0.060s 00:05:18.494 05:57:43 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.494 05:57:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.494 ************************************ 00:05:18.494 END TEST env_dpdk_post_init 00:05:18.494 ************************************ 00:05:18.494 05:57:43 env -- env/env.sh@26 -- # uname 00:05:18.494 05:57:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.494 05:57:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.494 05:57:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.494 05:57:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.494 05:57:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.494 ************************************ 00:05:18.494 START TEST env_mem_callbacks 00:05:18.494 ************************************ 00:05:18.494 05:57:43 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.494 EAL: Detected CPU lcores: 10 00:05:18.494 EAL: Detected NUMA nodes: 1 00:05:18.494 EAL: Detected shared linkage of DPDK 00:05:18.494 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.494 EAL: Selected IOVA mode 'PA' 00:05:18.754 00:05:18.754 00:05:18.754 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.754 http://cunit.sourceforge.net/ 00:05:18.754 00:05:18.754 00:05:18.754 Suite: memory 00:05:18.754 Test: test ... 00:05:18.754 register 0x200000200000 2097152 00:05:18.754 malloc 3145728 00:05:18.754 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.754 register 0x200000400000 4194304 00:05:18.754 buf 0x200000500000 len 3145728 PASSED 00:05:18.754 malloc 64 00:05:18.754 buf 0x2000004fff40 len 64 PASSED 00:05:18.754 malloc 4194304 00:05:18.754 register 0x200000800000 6291456 00:05:18.754 buf 0x200000a00000 len 4194304 PASSED 00:05:18.754 free 0x200000500000 3145728 00:05:18.754 free 0x2000004fff40 64 00:05:18.754 unregister 0x200000400000 4194304 PASSED 00:05:18.754 free 0x200000a00000 4194304 00:05:18.754 unregister 0x200000800000 6291456 PASSED 00:05:18.754 malloc 8388608 00:05:18.754 register 0x200000400000 10485760 00:05:18.754 buf 0x200000600000 len 8388608 PASSED 00:05:18.754 free 0x200000600000 8388608 00:05:18.754 unregister 0x200000400000 10485760 PASSED 00:05:18.754 passed 00:05:18.754 00:05:18.754 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.754 suites 1 1 n/a 0 0 00:05:18.754 tests 1 1 1 0 0 00:05:18.754 asserts 15 15 15 0 n/a 00:05:18.754 00:05:18.754 Elapsed time = 0.009 seconds 00:05:18.754 00:05:18.754 real 0m0.152s 00:05:18.754 user 0m0.020s 00:05:18.754 sys 0m0.030s 00:05:18.754 05:57:44 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.754 05:57:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:18.754 ************************************ 00:05:18.754 END TEST env_mem_callbacks 00:05:18.754 ************************************ 00:05:18.754 00:05:18.754 real 0m2.291s 00:05:18.754 user 0m1.031s 00:05:18.754 sys 0m0.924s 00:05:18.754 05:57:44 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.754 05:57:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.754 ************************************ 00:05:18.754 END TEST env 00:05:18.754 
************************************ 00:05:18.754 05:57:44 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:18.754 05:57:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.754 05:57:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.754 05:57:44 -- common/autotest_common.sh@10 -- # set +x 00:05:18.754 ************************************ 00:05:18.754 START TEST rpc 00:05:18.754 ************************************ 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:18.754 * Looking for test storage... 00:05:18.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.754 05:57:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.754 05:57:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.754 05:57:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.754 05:57:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.754 05:57:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.754 05:57:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.754 05:57:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.754 05:57:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.754 05:57:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.754 05:57:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.754 05:57:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.754 05:57:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.754 05:57:44 rpc -- scripts/common.sh@345 -- # : 1 00:05:18.754 05:57:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.754 05:57:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.754 05:57:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.754 05:57:44 rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.754 05:57:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.754 05:57:44 rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.754 05:57:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.754 05:57:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.754 05:57:44 rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.754 05:57:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.754 05:57:44 rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.754 05:57:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.754 05:57:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.754 05:57:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.754 05:57:44 rpc -- scripts/common.sh@368 -- # return 0 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.754 --rc genhtml_branch_coverage=1 00:05:18.754 --rc genhtml_function_coverage=1 00:05:18.754 --rc genhtml_legend=1 00:05:18.754 --rc geninfo_all_blocks=1 00:05:18.754 --rc geninfo_unexecuted_blocks=1 00:05:18.754 00:05:18.754 ' 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.754 --rc genhtml_branch_coverage=1 00:05:18.754 --rc genhtml_function_coverage=1 00:05:18.754 --rc genhtml_legend=1 00:05:18.754 --rc geninfo_all_blocks=1 00:05:18.754 --rc geninfo_unexecuted_blocks=1 00:05:18.754 00:05:18.754 ' 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.754 --rc genhtml_branch_coverage=1 00:05:18.754 --rc genhtml_function_coverage=1 00:05:18.754 --rc genhtml_legend=1 00:05:18.754 --rc geninfo_all_blocks=1 00:05:18.754 --rc geninfo_unexecuted_blocks=1 00:05:18.754 00:05:18.754 ' 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.754 --rc genhtml_branch_coverage=1 00:05:18.754 --rc genhtml_function_coverage=1 00:05:18.754 --rc genhtml_legend=1 00:05:18.754 --rc geninfo_all_blocks=1 00:05:18.754 --rc geninfo_unexecuted_blocks=1 00:05:18.754 00:05:18.754 ' 00:05:18.754 05:57:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69441 00:05:18.754 05:57:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.754 05:57:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:18.754 05:57:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69441 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@831 -- # '[' -z 69441 ']' 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
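rpc.sh backgrounds spdk_tgt with '-e bdev' and then blocks in waitforlisten until the RPC socket answers, which is what the 'Waiting for process to start up...' line above is reporting. A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock path from the trace and using spdk_get_version as the liveness probe:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # Poll the RPC socket until the target responds (bounded retries).
    for ((i = 0; i < 100; i++)); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          spdk_get_version &>/dev/null && break
      sleep 0.1
    done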
00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.754 05:57:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.016 [2024-10-01 05:57:44.418200] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:19.016 [2024-10-01 05:57:44.418468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69441 ] 00:05:19.016 [2024-10-01 05:57:44.553518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.016 [2024-10-01 05:57:44.595321] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:19.016 [2024-10-01 05:57:44.595508] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69441' to capture a snapshot of events at runtime. 00:05:19.016 [2024-10-01 05:57:44.595587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:19.016 [2024-10-01 05:57:44.595619] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:19.016 [2024-10-01 05:57:44.595649] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69441 for offline analysis/debug. 00:05:19.016 [2024-10-01 05:57:44.595710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.959 05:57:45 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.959 05:57:45 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:19.959 05:57:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:19.959 05:57:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:19.959 05:57:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:19.959 05:57:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:19.959 05:57:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.959 05:57:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.959 05:57:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.959 ************************************ 00:05:19.959 START TEST rpc_integrity 00:05:19.959 ************************************ 00:05:19.959 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:19.959 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.959 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.959 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.959 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.959 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.960 05:57:45 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.960 { 00:05:19.960 "name": "Malloc0", 00:05:19.960 "aliases": [ 00:05:19.960 "60beda49-7aee-4783-b841-cf66b70b8c7e" 00:05:19.960 ], 00:05:19.960 "product_name": "Malloc disk", 00:05:19.960 "block_size": 512, 00:05:19.960 "num_blocks": 16384, 00:05:19.960 "uuid": "60beda49-7aee-4783-b841-cf66b70b8c7e", 00:05:19.960 "assigned_rate_limits": { 00:05:19.960 "rw_ios_per_sec": 0, 00:05:19.960 "rw_mbytes_per_sec": 0, 00:05:19.960 "r_mbytes_per_sec": 0, 00:05:19.960 "w_mbytes_per_sec": 0 00:05:19.960 }, 00:05:19.960 "claimed": false, 00:05:19.960 "zoned": false, 00:05:19.960 "supported_io_types": { 00:05:19.960 "read": true, 00:05:19.960 "write": true, 00:05:19.960 "unmap": true, 00:05:19.960 "flush": true, 00:05:19.960 "reset": true, 00:05:19.960 "nvme_admin": false, 00:05:19.960 "nvme_io": false, 00:05:19.960 "nvme_io_md": false, 00:05:19.960 "write_zeroes": true, 00:05:19.960 "zcopy": true, 00:05:19.960 "get_zone_info": false, 00:05:19.960 "zone_management": false, 00:05:19.960 "zone_append": false, 00:05:19.960 "compare": false, 00:05:19.960 "compare_and_write": false, 00:05:19.960 "abort": true, 00:05:19.960 "seek_hole": false, 00:05:19.960 "seek_data": false, 00:05:19.960 "copy": true, 00:05:19.960 "nvme_iov_md": false 00:05:19.960 }, 00:05:19.960 "memory_domains": [ 00:05:19.960 { 00:05:19.960 "dma_device_id": "system", 00:05:19.960 "dma_device_type": 1 00:05:19.960 }, 00:05:19.960 { 00:05:19.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.960 "dma_device_type": 2 00:05:19.960 } 00:05:19.960 ], 00:05:19.960 "driver_specific": {} 00:05:19.960 } 00:05:19.960 ]' 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.960 [2024-10-01 05:57:45.371271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:19.960 [2024-10-01 05:57:45.371445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.960 [2024-10-01 05:57:45.371509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:19.960 [2024-10-01 05:57:45.371581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.960 [2024-10-01 05:57:45.373990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.960 [2024-10-01 05:57:45.374103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.960 Passthru0 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.960 
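The rpc_integrity steps above (and the teardown that follows below) map one-to-one onto plain rpc.py calls; rpc_cmd is effectively a thin wrapper around them. The same sequence from the command line, using the parameters visible in the trace (an 8 MB malloc bdev with 512-byte blocks, hence num_blocks 16384):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs | jq length                    # 0 on a fresh target
    $rpc bdev_malloc_create 8 512                      # prints the name, e.g. Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # stack a passthru on top
    $rpc bdev_get_bdevs | jq length                    # now 2
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0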
05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.960 { 00:05:19.960 "name": "Malloc0", 00:05:19.960 "aliases": [ 00:05:19.960 "60beda49-7aee-4783-b841-cf66b70b8c7e" 00:05:19.960 ], 00:05:19.960 "product_name": "Malloc disk", 00:05:19.960 "block_size": 512, 00:05:19.960 "num_blocks": 16384, 00:05:19.960 "uuid": "60beda49-7aee-4783-b841-cf66b70b8c7e", 00:05:19.960 "assigned_rate_limits": { 00:05:19.960 "rw_ios_per_sec": 0, 00:05:19.960 "rw_mbytes_per_sec": 0, 00:05:19.960 "r_mbytes_per_sec": 0, 00:05:19.960 "w_mbytes_per_sec": 0 00:05:19.960 }, 00:05:19.960 "claimed": true, 00:05:19.960 "claim_type": "exclusive_write", 00:05:19.960 "zoned": false, 00:05:19.960 "supported_io_types": { 00:05:19.960 "read": true, 00:05:19.960 "write": true, 00:05:19.960 "unmap": true, 00:05:19.960 "flush": true, 00:05:19.960 "reset": true, 00:05:19.960 "nvme_admin": false, 00:05:19.960 "nvme_io": false, 00:05:19.960 "nvme_io_md": false, 00:05:19.960 "write_zeroes": true, 00:05:19.960 "zcopy": true, 00:05:19.960 "get_zone_info": false, 00:05:19.960 "zone_management": false, 00:05:19.960 "zone_append": false, 00:05:19.960 "compare": false, 00:05:19.960 "compare_and_write": false, 00:05:19.960 "abort": true, 00:05:19.960 "seek_hole": false, 00:05:19.960 "seek_data": false, 00:05:19.960 "copy": true, 00:05:19.960 "nvme_iov_md": false 00:05:19.960 }, 00:05:19.960 "memory_domains": [ 00:05:19.960 { 00:05:19.960 "dma_device_id": "system", 00:05:19.960 "dma_device_type": 1 00:05:19.960 }, 00:05:19.960 { 00:05:19.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.960 "dma_device_type": 2 00:05:19.960 } 00:05:19.960 ], 00:05:19.960 "driver_specific": {} 00:05:19.960 }, 00:05:19.960 { 00:05:19.960 "name": "Passthru0", 00:05:19.960 "aliases": [ 00:05:19.960 "1c124bfe-bef0-5c0e-84aa-7da77552c4db" 00:05:19.960 ], 00:05:19.960 "product_name": "passthru", 00:05:19.960 "block_size": 512, 00:05:19.960 "num_blocks": 16384, 00:05:19.960 "uuid": "1c124bfe-bef0-5c0e-84aa-7da77552c4db", 00:05:19.960 "assigned_rate_limits": { 00:05:19.960 "rw_ios_per_sec": 0, 00:05:19.960 "rw_mbytes_per_sec": 0, 00:05:19.960 "r_mbytes_per_sec": 0, 00:05:19.960 "w_mbytes_per_sec": 0 00:05:19.960 }, 00:05:19.960 "claimed": false, 00:05:19.960 "zoned": false, 00:05:19.960 "supported_io_types": { 00:05:19.960 "read": true, 00:05:19.960 "write": true, 00:05:19.960 "unmap": true, 00:05:19.960 "flush": true, 00:05:19.960 "reset": true, 00:05:19.960 "nvme_admin": false, 00:05:19.960 "nvme_io": false, 00:05:19.960 "nvme_io_md": false, 00:05:19.960 "write_zeroes": true, 00:05:19.960 "zcopy": true, 00:05:19.960 "get_zone_info": false, 00:05:19.960 "zone_management": false, 00:05:19.960 "zone_append": false, 00:05:19.960 "compare": false, 00:05:19.960 "compare_and_write": false, 00:05:19.960 "abort": true, 00:05:19.960 "seek_hole": false, 00:05:19.960 "seek_data": false, 00:05:19.960 "copy": true, 00:05:19.960 "nvme_iov_md": false 00:05:19.960 }, 00:05:19.960 "memory_domains": [ 00:05:19.960 { 00:05:19.960 "dma_device_id": "system", 00:05:19.960 "dma_device_type": 1 00:05:19.960 }, 00:05:19.960 { 00:05:19.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.960 "dma_device_type": 2 
00:05:19.960 } 00:05:19.960 ], 00:05:19.960 "driver_specific": { 00:05:19.960 "passthru": { 00:05:19.960 "name": "Passthru0", 00:05:19.960 "base_bdev_name": "Malloc0" 00:05:19.960 } 00:05:19.960 } 00:05:19.960 } 00:05:19.960 ]' 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.960 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:19.960 ************************************ 00:05:19.960 END TEST rpc_integrity 00:05:19.960 ************************************ 00:05:19.960 05:57:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.960 00:05:19.960 real 0m0.237s 00:05:19.960 user 0m0.133s 00:05:19.960 sys 0m0.034s 00:05:19.961 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.961 05:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.961 05:57:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:19.961 05:57:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.961 05:57:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.961 05:57:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.961 ************************************ 00:05:19.961 START TEST rpc_plugins 00:05:19.961 ************************************ 00:05:19.961 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:19.961 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:19.961 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.961 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.961 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.961 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:19.961 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:19.961 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.961 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.223 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.223 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:20.223 { 00:05:20.223 "name": "Malloc1", 00:05:20.223 "aliases": 
[ 00:05:20.223 "a1fb2e9b-86cb-4585-9dd1-a1f0941007ca" 00:05:20.223 ], 00:05:20.223 "product_name": "Malloc disk", 00:05:20.223 "block_size": 4096, 00:05:20.223 "num_blocks": 256, 00:05:20.223 "uuid": "a1fb2e9b-86cb-4585-9dd1-a1f0941007ca", 00:05:20.223 "assigned_rate_limits": { 00:05:20.224 "rw_ios_per_sec": 0, 00:05:20.224 "rw_mbytes_per_sec": 0, 00:05:20.224 "r_mbytes_per_sec": 0, 00:05:20.224 "w_mbytes_per_sec": 0 00:05:20.224 }, 00:05:20.224 "claimed": false, 00:05:20.224 "zoned": false, 00:05:20.224 "supported_io_types": { 00:05:20.224 "read": true, 00:05:20.224 "write": true, 00:05:20.224 "unmap": true, 00:05:20.224 "flush": true, 00:05:20.224 "reset": true, 00:05:20.224 "nvme_admin": false, 00:05:20.224 "nvme_io": false, 00:05:20.224 "nvme_io_md": false, 00:05:20.224 "write_zeroes": true, 00:05:20.224 "zcopy": true, 00:05:20.224 "get_zone_info": false, 00:05:20.224 "zone_management": false, 00:05:20.224 "zone_append": false, 00:05:20.224 "compare": false, 00:05:20.224 "compare_and_write": false, 00:05:20.224 "abort": true, 00:05:20.224 "seek_hole": false, 00:05:20.224 "seek_data": false, 00:05:20.224 "copy": true, 00:05:20.224 "nvme_iov_md": false 00:05:20.224 }, 00:05:20.224 "memory_domains": [ 00:05:20.224 { 00:05:20.224 "dma_device_id": "system", 00:05:20.224 "dma_device_type": 1 00:05:20.224 }, 00:05:20.224 { 00:05:20.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.224 "dma_device_type": 2 00:05:20.224 } 00:05:20.224 ], 00:05:20.224 "driver_specific": {} 00:05:20.224 } 00:05:20.224 ]' 00:05:20.224 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:20.224 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:20.224 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:20.224 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.224 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.224 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.224 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:20.224 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.224 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.224 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.224 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:20.224 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:20.224 ************************************ 00:05:20.224 END TEST rpc_plugins 00:05:20.224 ************************************ 00:05:20.224 05:57:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:20.224 00:05:20.224 real 0m0.117s 00:05:20.224 user 0m0.061s 00:05:20.224 sys 0m0.017s 00:05:20.224 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.224 05:57:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.224 05:57:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.224 05:57:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.224 05:57:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.224 05:57:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.224 ************************************ 00:05:20.224 START TEST rpc_trace_cmd_test 00:05:20.224 ************************************ 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:20.224 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69441", 00:05:20.224 "tpoint_group_mask": "0x8", 00:05:20.224 "iscsi_conn": { 00:05:20.224 "mask": "0x2", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "scsi": { 00:05:20.224 "mask": "0x4", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "bdev": { 00:05:20.224 "mask": "0x8", 00:05:20.224 "tpoint_mask": "0xffffffffffffffff" 00:05:20.224 }, 00:05:20.224 "nvmf_rdma": { 00:05:20.224 "mask": "0x10", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "nvmf_tcp": { 00:05:20.224 "mask": "0x20", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "ftl": { 00:05:20.224 "mask": "0x40", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "blobfs": { 00:05:20.224 "mask": "0x80", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "dsa": { 00:05:20.224 "mask": "0x200", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "thread": { 00:05:20.224 "mask": "0x400", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "nvme_pcie": { 00:05:20.224 "mask": "0x800", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "iaa": { 00:05:20.224 "mask": "0x1000", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "nvme_tcp": { 00:05:20.224 "mask": "0x2000", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "bdev_nvme": { 00:05:20.224 "mask": "0x4000", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "sock": { 00:05:20.224 "mask": "0x8000", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "blob": { 00:05:20.224 "mask": "0x10000", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 }, 00:05:20.224 "bdev_raid": { 00:05:20.224 "mask": "0x20000", 00:05:20.224 "tpoint_mask": "0x0" 00:05:20.224 } 00:05:20.224 }' 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.224 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:20.487 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:20.487 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:20.487 ************************************ 00:05:20.487 END TEST rpc_trace_cmd_test 00:05:20.487 ************************************ 00:05:20.487 05:57:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:20.487 00:05:20.487 real 0m0.174s 00:05:20.487 user 0m0.138s 00:05:20.487 sys 0m0.026s 00:05:20.487 05:57:45 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.487 05:57:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.487 05:57:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:20.487 05:57:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:20.487 05:57:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:20.487 05:57:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.487 05:57:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.487 05:57:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.487 ************************************ 00:05:20.487 START TEST rpc_daemon_integrity 00:05:20.487 ************************************ 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.487 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.487 { 00:05:20.487 "name": "Malloc2", 00:05:20.487 "aliases": [ 00:05:20.487 "418a3562-229c-4b6c-8f89-6e0ca0af48f9" 00:05:20.487 ], 00:05:20.487 "product_name": "Malloc disk", 00:05:20.487 "block_size": 512, 00:05:20.487 "num_blocks": 16384, 00:05:20.487 "uuid": "418a3562-229c-4b6c-8f89-6e0ca0af48f9", 00:05:20.487 "assigned_rate_limits": { 00:05:20.487 "rw_ios_per_sec": 0, 00:05:20.487 "rw_mbytes_per_sec": 0, 00:05:20.487 "r_mbytes_per_sec": 0, 00:05:20.488 "w_mbytes_per_sec": 0 00:05:20.488 }, 00:05:20.488 "claimed": false, 00:05:20.488 "zoned": false, 00:05:20.488 "supported_io_types": { 00:05:20.488 "read": true, 00:05:20.488 "write": true, 00:05:20.488 "unmap": true, 00:05:20.488 "flush": true, 00:05:20.488 "reset": true, 00:05:20.488 "nvme_admin": false, 00:05:20.488 "nvme_io": false, 00:05:20.488 "nvme_io_md": false, 00:05:20.488 "write_zeroes": true, 00:05:20.488 "zcopy": true, 00:05:20.488 "get_zone_info": false, 00:05:20.488 "zone_management": false, 00:05:20.488 "zone_append": false, 00:05:20.488 "compare": false, 00:05:20.488 "compare_and_write": false, 00:05:20.488 "abort": true, 00:05:20.488 "seek_hole": false, 00:05:20.488 
"seek_data": false, 00:05:20.488 "copy": true, 00:05:20.488 "nvme_iov_md": false 00:05:20.488 }, 00:05:20.488 "memory_domains": [ 00:05:20.488 { 00:05:20.488 "dma_device_id": "system", 00:05:20.488 "dma_device_type": 1 00:05:20.488 }, 00:05:20.488 { 00:05:20.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.488 "dma_device_type": 2 00:05:20.488 } 00:05:20.488 ], 00:05:20.488 "driver_specific": {} 00:05:20.488 } 00:05:20.488 ]' 00:05:20.488 05:57:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 [2024-10-01 05:57:46.020025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:20.488 [2024-10-01 05:57:46.020086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.488 [2024-10-01 05:57:46.020109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:20.488 [2024-10-01 05:57:46.020119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.488 [2024-10-01 05:57:46.022501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.488 [2024-10-01 05:57:46.022536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.488 Passthru0 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.488 { 00:05:20.488 "name": "Malloc2", 00:05:20.488 "aliases": [ 00:05:20.488 "418a3562-229c-4b6c-8f89-6e0ca0af48f9" 00:05:20.488 ], 00:05:20.488 "product_name": "Malloc disk", 00:05:20.488 "block_size": 512, 00:05:20.488 "num_blocks": 16384, 00:05:20.488 "uuid": "418a3562-229c-4b6c-8f89-6e0ca0af48f9", 00:05:20.488 "assigned_rate_limits": { 00:05:20.488 "rw_ios_per_sec": 0, 00:05:20.488 "rw_mbytes_per_sec": 0, 00:05:20.488 "r_mbytes_per_sec": 0, 00:05:20.488 "w_mbytes_per_sec": 0 00:05:20.488 }, 00:05:20.488 "claimed": true, 00:05:20.488 "claim_type": "exclusive_write", 00:05:20.488 "zoned": false, 00:05:20.488 "supported_io_types": { 00:05:20.488 "read": true, 00:05:20.488 "write": true, 00:05:20.488 "unmap": true, 00:05:20.488 "flush": true, 00:05:20.488 "reset": true, 00:05:20.488 "nvme_admin": false, 00:05:20.488 "nvme_io": false, 00:05:20.488 "nvme_io_md": false, 00:05:20.488 "write_zeroes": true, 00:05:20.488 "zcopy": true, 00:05:20.488 "get_zone_info": false, 00:05:20.488 "zone_management": false, 00:05:20.488 "zone_append": false, 00:05:20.488 "compare": false, 00:05:20.488 "compare_and_write": false, 00:05:20.488 "abort": true, 00:05:20.488 "seek_hole": false, 00:05:20.488 "seek_data": false, 00:05:20.488 "copy": true, 00:05:20.488 "nvme_iov_md": false 00:05:20.488 }, 00:05:20.488 
"memory_domains": [ 00:05:20.488 { 00:05:20.488 "dma_device_id": "system", 00:05:20.488 "dma_device_type": 1 00:05:20.488 }, 00:05:20.488 { 00:05:20.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.488 "dma_device_type": 2 00:05:20.488 } 00:05:20.488 ], 00:05:20.488 "driver_specific": {} 00:05:20.488 }, 00:05:20.488 { 00:05:20.488 "name": "Passthru0", 00:05:20.488 "aliases": [ 00:05:20.488 "7e1540c5-d986-53f7-b227-967c08fdabca" 00:05:20.488 ], 00:05:20.488 "product_name": "passthru", 00:05:20.488 "block_size": 512, 00:05:20.488 "num_blocks": 16384, 00:05:20.488 "uuid": "7e1540c5-d986-53f7-b227-967c08fdabca", 00:05:20.488 "assigned_rate_limits": { 00:05:20.488 "rw_ios_per_sec": 0, 00:05:20.488 "rw_mbytes_per_sec": 0, 00:05:20.488 "r_mbytes_per_sec": 0, 00:05:20.488 "w_mbytes_per_sec": 0 00:05:20.488 }, 00:05:20.488 "claimed": false, 00:05:20.488 "zoned": false, 00:05:20.488 "supported_io_types": { 00:05:20.488 "read": true, 00:05:20.488 "write": true, 00:05:20.488 "unmap": true, 00:05:20.488 "flush": true, 00:05:20.488 "reset": true, 00:05:20.488 "nvme_admin": false, 00:05:20.488 "nvme_io": false, 00:05:20.488 "nvme_io_md": false, 00:05:20.488 "write_zeroes": true, 00:05:20.488 "zcopy": true, 00:05:20.488 "get_zone_info": false, 00:05:20.488 "zone_management": false, 00:05:20.488 "zone_append": false, 00:05:20.488 "compare": false, 00:05:20.488 "compare_and_write": false, 00:05:20.488 "abort": true, 00:05:20.488 "seek_hole": false, 00:05:20.488 "seek_data": false, 00:05:20.488 "copy": true, 00:05:20.488 "nvme_iov_md": false 00:05:20.488 }, 00:05:20.488 "memory_domains": [ 00:05:20.488 { 00:05:20.488 "dma_device_id": "system", 00:05:20.488 "dma_device_type": 1 00:05:20.488 }, 00:05:20.488 { 00:05:20.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.488 "dma_device_type": 2 00:05:20.488 } 00:05:20.488 ], 00:05:20.488 "driver_specific": { 00:05:20.488 "passthru": { 00:05:20.488 "name": "Passthru0", 00:05:20.488 "base_bdev_name": "Malloc2" 00:05:20.488 } 00:05:20.488 } 00:05:20.488 } 00:05:20.488 ]' 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.488 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.749 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.749 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.749 05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.749 
05:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.749 00:05:20.749 real 0m0.220s 00:05:20.749 user 0m0.123s 00:05:20.749 sys 0m0.033s 00:05:20.749 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.749 05:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.749 ************************************ 00:05:20.749 END TEST rpc_daemon_integrity 00:05:20.749 ************************************ 00:05:20.749 05:57:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:20.749 05:57:46 rpc -- rpc/rpc.sh@84 -- # killprocess 69441 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@950 -- # '[' -z 69441 ']' 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@954 -- # kill -0 69441 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69441 00:05:20.749 killing process with pid 69441 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69441' 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@969 -- # kill 69441 00:05:20.749 05:57:46 rpc -- common/autotest_common.sh@974 -- # wait 69441 00:05:21.009 00:05:21.009 real 0m2.353s 00:05:21.009 user 0m2.757s 00:05:21.009 sys 0m0.614s 00:05:21.009 05:57:46 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.009 ************************************ 00:05:21.009 05:57:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.009 END TEST rpc 00:05:21.009 ************************************ 00:05:21.009 05:57:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:21.009 05:57:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.009 05:57:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.009 05:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.009 ************************************ 00:05:21.009 START TEST skip_rpc 00:05:21.009 ************************************ 00:05:21.009 05:57:46 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:21.269 * Looking for test storage... 
00:05:21.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.269 05:57:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.269 --rc genhtml_branch_coverage=1 00:05:21.269 --rc genhtml_function_coverage=1 00:05:21.269 --rc genhtml_legend=1 00:05:21.269 --rc geninfo_all_blocks=1 00:05:21.269 --rc geninfo_unexecuted_blocks=1 00:05:21.269 00:05:21.269 ' 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.269 --rc genhtml_branch_coverage=1 00:05:21.269 --rc genhtml_function_coverage=1 00:05:21.269 --rc genhtml_legend=1 00:05:21.269 --rc geninfo_all_blocks=1 00:05:21.269 --rc geninfo_unexecuted_blocks=1 00:05:21.269 00:05:21.269 ' 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.269 --rc genhtml_branch_coverage=1 00:05:21.269 --rc genhtml_function_coverage=1 00:05:21.269 --rc genhtml_legend=1 00:05:21.269 --rc geninfo_all_blocks=1 00:05:21.269 --rc geninfo_unexecuted_blocks=1 00:05:21.269 00:05:21.269 ' 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.269 --rc genhtml_branch_coverage=1 00:05:21.269 --rc genhtml_function_coverage=1 00:05:21.269 --rc genhtml_legend=1 00:05:21.269 --rc geninfo_all_blocks=1 00:05:21.269 --rc geninfo_unexecuted_blocks=1 00:05:21.269 00:05:21.269 ' 00:05:21.269 05:57:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.269 05:57:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:21.269 05:57:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.269 05:57:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.269 ************************************ 00:05:21.269 START TEST skip_rpc 00:05:21.269 ************************************ 00:05:21.269 05:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:21.269 05:57:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69643 00:05:21.269 05:57:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.269 05:57:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:21.269 05:57:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:21.269 [2024-10-01 05:57:46.838667] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
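(The target above was launched with --no-rpc-server, so the test's only assertion is that an RPC call fails. A condensed sketch of the pattern, reconstructed from the surrounding log rather than taken from the rpc/skip_rpc.sh source — the binary path and the scripts/rpc.py helper are assumptions:)

  # Sketch: with no RPC server running, any RPC must fail.
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                   # give the target time to start, as the test does
  if scripts/rpc.py spdk_get_version; then  # expected to fail: nothing listens on /var/tmp/spdk.sock
      echo "unexpected: RPC answered with --no-rpc-server" >&2
      kill "$tgt_pid"
      exit 1
  fi
  kill "$tgt_pid"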
00:05:21.270 [2024-10-01 05:57:46.838959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69643 ] 00:05:21.530 [2024-10-01 05:57:46.971113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.530 [2024-10-01 05:57:47.014584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69643 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69643 ']' 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69643 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69643 00:05:26.859 killing process with pid 69643 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69643' 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69643 00:05:26.859 05:57:51 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69643 00:05:26.859 00:05:26.859 real 0m5.423s 00:05:26.859 user 0m5.036s 00:05:26.859 sys 0m0.280s 00:05:26.859 05:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.859 ************************************ 00:05:26.859 END TEST skip_rpc 00:05:26.859 ************************************ 00:05:26.859 05:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:26.859 05:57:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:26.859 05:57:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.859 05:57:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.859 05:57:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.859 ************************************ 00:05:26.859 START TEST skip_rpc_with_json 00:05:26.859 ************************************ 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69730 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69730 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69730 ']' 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.859 05:57:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.859 [2024-10-01 05:57:52.319063] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
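(This second target starts with its RPC server enabled so the test can build a configuration over RPC, persist it, and replay it, as the log below shows. A condensed sketch of that round trip, paraphrased from the log and hedged accordingly — the scripts/rpc.py helper and exact paths are assumptions:)

  # Sketch: create a TCP transport over RPC, save the config as JSON, then
  # relaunch the target from that JSON and check the transport came back.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  kill "$tgt_pid"
  build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt   # transport restored from the saved JSON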
00:05:26.859 [2024-10-01 05:57:52.319348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69730 ] 00:05:26.859 [2024-10-01 05:57:52.447584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.117 [2024-10-01 05:57:52.489905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.684 [2024-10-01 05:57:53.131335] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:27.684 request: 00:05:27.684 { 00:05:27.684 "trtype": "tcp", 00:05:27.684 "method": "nvmf_get_transports", 00:05:27.684 "req_id": 1 00:05:27.684 } 00:05:27.684 Got JSON-RPC error response 00:05:27.684 response: 00:05:27.684 { 00:05:27.684 "code": -19, 00:05:27.684 "message": "No such device" 00:05:27.684 } 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.684 [2024-10-01 05:57:53.139426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.684 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.684 { 00:05:27.684 "subsystems": [ 00:05:27.684 { 00:05:27.684 "subsystem": "fsdev", 00:05:27.684 "config": [ 00:05:27.684 { 00:05:27.684 "method": "fsdev_set_opts", 00:05:27.684 "params": { 00:05:27.684 "fsdev_io_pool_size": 65535, 00:05:27.684 "fsdev_io_cache_size": 256 00:05:27.684 } 00:05:27.684 } 00:05:27.684 ] 00:05:27.684 }, 00:05:27.684 { 00:05:27.684 "subsystem": "keyring", 00:05:27.684 "config": [] 00:05:27.684 }, 00:05:27.684 { 00:05:27.684 "subsystem": "iobuf", 00:05:27.684 "config": [ 00:05:27.684 { 00:05:27.684 "method": "iobuf_set_options", 00:05:27.684 "params": { 00:05:27.684 "small_pool_count": 8192, 00:05:27.684 "large_pool_count": 1024, 00:05:27.684 "small_bufsize": 8192, 00:05:27.684 "large_bufsize": 135168 00:05:27.684 } 00:05:27.684 } 00:05:27.684 ] 00:05:27.684 }, 00:05:27.684 { 00:05:27.684 "subsystem": "sock", 00:05:27.684 "config": [ 00:05:27.684 { 00:05:27.684 "method": 
"sock_set_default_impl", 00:05:27.684 "params": { 00:05:27.684 "impl_name": "posix" 00:05:27.684 } 00:05:27.684 }, 00:05:27.684 { 00:05:27.684 "method": "sock_impl_set_options", 00:05:27.684 "params": { 00:05:27.684 "impl_name": "ssl", 00:05:27.684 "recv_buf_size": 4096, 00:05:27.684 "send_buf_size": 4096, 00:05:27.684 "enable_recv_pipe": true, 00:05:27.684 "enable_quickack": false, 00:05:27.684 "enable_placement_id": 0, 00:05:27.684 "enable_zerocopy_send_server": true, 00:05:27.684 "enable_zerocopy_send_client": false, 00:05:27.684 "zerocopy_threshold": 0, 00:05:27.684 "tls_version": 0, 00:05:27.684 "enable_ktls": false 00:05:27.684 } 00:05:27.684 }, 00:05:27.684 { 00:05:27.684 "method": "sock_impl_set_options", 00:05:27.684 "params": { 00:05:27.685 "impl_name": "posix", 00:05:27.685 "recv_buf_size": 2097152, 00:05:27.685 "send_buf_size": 2097152, 00:05:27.685 "enable_recv_pipe": true, 00:05:27.685 "enable_quickack": false, 00:05:27.685 "enable_placement_id": 0, 00:05:27.685 "enable_zerocopy_send_server": true, 00:05:27.685 "enable_zerocopy_send_client": false, 00:05:27.685 "zerocopy_threshold": 0, 00:05:27.685 "tls_version": 0, 00:05:27.685 "enable_ktls": false 00:05:27.685 } 00:05:27.685 } 00:05:27.685 ] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "vmd", 00:05:27.685 "config": [] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "accel", 00:05:27.685 "config": [ 00:05:27.685 { 00:05:27.685 "method": "accel_set_options", 00:05:27.685 "params": { 00:05:27.685 "small_cache_size": 128, 00:05:27.685 "large_cache_size": 16, 00:05:27.685 "task_count": 2048, 00:05:27.685 "sequence_count": 2048, 00:05:27.685 "buf_count": 2048 00:05:27.685 } 00:05:27.685 } 00:05:27.685 ] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "bdev", 00:05:27.685 "config": [ 00:05:27.685 { 00:05:27.685 "method": "bdev_set_options", 00:05:27.685 "params": { 00:05:27.685 "bdev_io_pool_size": 65535, 00:05:27.685 "bdev_io_cache_size": 256, 00:05:27.685 "bdev_auto_examine": true, 00:05:27.685 "iobuf_small_cache_size": 128, 00:05:27.685 "iobuf_large_cache_size": 16 00:05:27.685 } 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "method": "bdev_raid_set_options", 00:05:27.685 "params": { 00:05:27.685 "process_window_size_kb": 1024, 00:05:27.685 "process_max_bandwidth_mb_sec": 0 00:05:27.685 } 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "method": "bdev_iscsi_set_options", 00:05:27.685 "params": { 00:05:27.685 "timeout_sec": 30 00:05:27.685 } 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "method": "bdev_nvme_set_options", 00:05:27.685 "params": { 00:05:27.685 "action_on_timeout": "none", 00:05:27.685 "timeout_us": 0, 00:05:27.685 "timeout_admin_us": 0, 00:05:27.685 "keep_alive_timeout_ms": 10000, 00:05:27.685 "arbitration_burst": 0, 00:05:27.685 "low_priority_weight": 0, 00:05:27.685 "medium_priority_weight": 0, 00:05:27.685 "high_priority_weight": 0, 00:05:27.685 "nvme_adminq_poll_period_us": 10000, 00:05:27.685 "nvme_ioq_poll_period_us": 0, 00:05:27.685 "io_queue_requests": 0, 00:05:27.685 "delay_cmd_submit": true, 00:05:27.685 "transport_retry_count": 4, 00:05:27.685 "bdev_retry_count": 3, 00:05:27.685 "transport_ack_timeout": 0, 00:05:27.685 "ctrlr_loss_timeout_sec": 0, 00:05:27.685 "reconnect_delay_sec": 0, 00:05:27.685 "fast_io_fail_timeout_sec": 0, 00:05:27.685 "disable_auto_failback": false, 00:05:27.685 "generate_uuids": false, 00:05:27.685 "transport_tos": 0, 00:05:27.685 "nvme_error_stat": false, 00:05:27.685 "rdma_srq_size": 0, 00:05:27.685 "io_path_stat": false, 00:05:27.685 
"allow_accel_sequence": false, 00:05:27.685 "rdma_max_cq_size": 0, 00:05:27.685 "rdma_cm_event_timeout_ms": 0, 00:05:27.685 "dhchap_digests": [ 00:05:27.685 "sha256", 00:05:27.685 "sha384", 00:05:27.685 "sha512" 00:05:27.685 ], 00:05:27.685 "dhchap_dhgroups": [ 00:05:27.685 "null", 00:05:27.685 "ffdhe2048", 00:05:27.685 "ffdhe3072", 00:05:27.685 "ffdhe4096", 00:05:27.685 "ffdhe6144", 00:05:27.685 "ffdhe8192" 00:05:27.685 ] 00:05:27.685 } 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "method": "bdev_nvme_set_hotplug", 00:05:27.685 "params": { 00:05:27.685 "period_us": 100000, 00:05:27.685 "enable": false 00:05:27.685 } 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "method": "bdev_wait_for_examine" 00:05:27.685 } 00:05:27.685 ] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "scsi", 00:05:27.685 "config": null 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "scheduler", 00:05:27.685 "config": [ 00:05:27.685 { 00:05:27.685 "method": "framework_set_scheduler", 00:05:27.685 "params": { 00:05:27.685 "name": "static" 00:05:27.685 } 00:05:27.685 } 00:05:27.685 ] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "vhost_scsi", 00:05:27.685 "config": [] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "vhost_blk", 00:05:27.685 "config": [] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "ublk", 00:05:27.685 "config": [] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "nbd", 00:05:27.685 "config": [] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "nvmf", 00:05:27.685 "config": [ 00:05:27.685 { 00:05:27.685 "method": "nvmf_set_config", 00:05:27.685 "params": { 00:05:27.685 "discovery_filter": "match_any", 00:05:27.685 "admin_cmd_passthru": { 00:05:27.685 "identify_ctrlr": false 00:05:27.685 }, 00:05:27.685 "dhchap_digests": [ 00:05:27.685 "sha256", 00:05:27.685 "sha384", 00:05:27.685 "sha512" 00:05:27.685 ], 00:05:27.685 "dhchap_dhgroups": [ 00:05:27.685 "null", 00:05:27.685 "ffdhe2048", 00:05:27.685 "ffdhe3072", 00:05:27.685 "ffdhe4096", 00:05:27.685 "ffdhe6144", 00:05:27.685 "ffdhe8192" 00:05:27.685 ] 00:05:27.685 } 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "method": "nvmf_set_max_subsystems", 00:05:27.685 "params": { 00:05:27.685 "max_subsystems": 1024 00:05:27.685 } 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "method": "nvmf_set_crdt", 00:05:27.685 "params": { 00:05:27.685 "crdt1": 0, 00:05:27.685 "crdt2": 0, 00:05:27.685 "crdt3": 0 00:05:27.685 } 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "method": "nvmf_create_transport", 00:05:27.685 "params": { 00:05:27.685 "trtype": "TCP", 00:05:27.685 "max_queue_depth": 128, 00:05:27.685 "max_io_qpairs_per_ctrlr": 127, 00:05:27.685 "in_capsule_data_size": 4096, 00:05:27.685 "max_io_size": 131072, 00:05:27.685 "io_unit_size": 131072, 00:05:27.685 "max_aq_depth": 128, 00:05:27.685 "num_shared_buffers": 511, 00:05:27.685 "buf_cache_size": 4294967295, 00:05:27.685 "dif_insert_or_strip": false, 00:05:27.685 "zcopy": false, 00:05:27.685 "c2h_success": true, 00:05:27.685 "sock_priority": 0, 00:05:27.685 "abort_timeout_sec": 1, 00:05:27.685 "ack_timeout": 0, 00:05:27.685 "data_wr_pool_size": 0 00:05:27.685 } 00:05:27.685 } 00:05:27.685 ] 00:05:27.685 }, 00:05:27.685 { 00:05:27.685 "subsystem": "iscsi", 00:05:27.685 "config": [ 00:05:27.685 { 00:05:27.685 "method": "iscsi_set_options", 00:05:27.686 "params": { 00:05:27.686 "node_base": "iqn.2016-06.io.spdk", 00:05:27.686 "max_sessions": 128, 00:05:27.686 "max_connections_per_session": 2, 00:05:27.686 "max_queue_depth": 64, 00:05:27.686 "default_time2wait": 2, 
00:05:27.686 "default_time2retain": 20, 00:05:27.686 "first_burst_length": 8192, 00:05:27.686 "immediate_data": true, 00:05:27.686 "allow_duplicated_isid": false, 00:05:27.686 "error_recovery_level": 0, 00:05:27.686 "nop_timeout": 60, 00:05:27.686 "nop_in_interval": 30, 00:05:27.686 "disable_chap": false, 00:05:27.686 "require_chap": false, 00:05:27.686 "mutual_chap": false, 00:05:27.686 "chap_group": 0, 00:05:27.686 "max_large_datain_per_connection": 64, 00:05:27.686 "max_r2t_per_connection": 4, 00:05:27.686 "pdu_pool_size": 36864, 00:05:27.686 "immediate_data_pool_size": 16384, 00:05:27.686 "data_out_pool_size": 2048 00:05:27.686 } 00:05:27.686 } 00:05:27.686 ] 00:05:27.686 } 00:05:27.686 ] 00:05:27.686 } 00:05:27.686 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:27.686 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69730 00:05:27.686 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69730 ']' 00:05:27.686 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69730 00:05:27.686 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:27.686 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.944 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69730 00:05:27.944 killing process with pid 69730 00:05:27.944 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.944 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.944 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69730' 00:05:27.944 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69730 00:05:27.944 05:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69730 00:05:28.204 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69753 00:05:28.204 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.205 05:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69753 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69753 ']' 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69753 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69753 00:05:33.591 killing process with pid 69753 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69753' 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69753 
00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69753 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:33.591 00:05:33.591 real 0m6.737s 00:05:33.591 user 0m6.312s 00:05:33.591 sys 0m0.620s 00:05:33.591 ************************************ 00:05:33.591 END TEST skip_rpc_with_json 00:05:33.591 ************************************ 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.591 05:57:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.591 05:57:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:33.591 05:57:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.591 05:57:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.591 05:57:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.591 ************************************ 00:05:33.591 START TEST skip_rpc_with_delay 00:05:33.591 ************************************ 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.591 [2024-10-01 05:57:59.119691] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
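(The error above is the expected outcome: skip_rpc_with_delay asserts that --wait-for-rpc is rejected when no RPC server will be started. A minimal sketch of the assertion, paraphrased from the log rather than from the test source; the binary path is an assumption:)

  # Sketch: this flag combination must make the target exit non-zero.
  if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target accepted --wait-for-rpc with --no-rpc-server" >&2
      exit 1
  fi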
00:05:33.591 [2024-10-01 05:57:59.119829] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:33.591 ************************************ 00:05:33.591 END TEST skip_rpc_with_delay 00:05:33.591 ************************************ 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.591 00:05:33.591 real 0m0.130s 00:05:33.591 user 0m0.075s 00:05:33.591 sys 0m0.054s 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.591 05:57:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:33.849 05:57:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:33.849 05:57:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:33.849 05:57:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:33.849 05:57:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.849 05:57:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.849 05:57:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.849 ************************************ 00:05:33.849 START TEST exit_on_failed_rpc_init 00:05:33.849 ************************************ 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69865 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69865 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69865 ']' 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.849 05:57:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.849 [2024-10-01 05:57:59.322500] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:33.850 [2024-10-01 05:57:59.322621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69865 ] 00:05:33.850 [2024-10-01 05:57:59.453605] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.107 [2024-10-01 05:57:59.495694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.672 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.672 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:34.672 05:58:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.672 05:58:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.672 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:34.672 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.672 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.673 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.673 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.673 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.673 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.673 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.673 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.673 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:34.673 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.673 [2024-10-01 05:58:00.226552] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:34.673 [2024-10-01 05:58:00.226674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69883 ] 00:05:34.931 [2024-10-01 05:58:00.360268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.931 [2024-10-01 05:58:00.392896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.931 [2024-10-01 05:58:00.392978] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
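(The "socket in use" error above is the condition under test: exit_on_failed_rpc_init keeps a first target bound to /var/tmp/spdk.sock and then requires a second target to fail RPC initialization and exit non-zero, which the es=234 handling below confirms. A minimal sketch, paraphrased from the log; the binary path is an assumption:)

  # Sketch: a second instance on a different core mask must fail to bind the
  # default RPC socket and terminate with a non-zero exit code.
  build/bin/spdk_tgt -m 0x1 &          # first instance owns /var/tmp/spdk.sock
  first_pid=$!
  sleep 5
  if build/bin/spdk_tgt -m 0x2; then   # should fail: RPC socket already in use
      kill "$first_pid"
      exit 1
  fi
  kill "$first_pid"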
00:05:34.931 [2024-10-01 05:58:00.392993] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:34.931 [2024-10-01 05:58:00.393004] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69865 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69865 ']' 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69865 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69865 00:05:34.931 killing process with pid 69865 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69865' 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69865 00:05:34.931 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69865 00:05:35.502 00:05:35.502 real 0m1.566s 00:05:35.502 user 0m1.678s 00:05:35.502 sys 0m0.414s 00:05:35.502 ************************************ 00:05:35.502 END TEST exit_on_failed_rpc_init 00:05:35.502 ************************************ 00:05:35.502 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.502 05:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.502 05:58:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:35.502 ************************************ 00:05:35.502 END TEST skip_rpc 00:05:35.502 ************************************ 00:05:35.502 00:05:35.502 real 0m14.246s 00:05:35.502 user 0m13.243s 00:05:35.502 sys 0m1.551s 00:05:35.502 05:58:00 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.502 05:58:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.502 05:58:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.502 05:58:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.502 05:58:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.502 05:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:35.502 
************************************ 00:05:35.502 START TEST rpc_client 00:05:35.502 ************************************ 00:05:35.502 05:58:00 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.502 * Looking for test storage... 00:05:35.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:35.502 05:58:00 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.502 05:58:00 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.502 05:58:00 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.502 05:58:01 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.502 05:58:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.502 05:58:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.502 05:58:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.502 05:58:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.502 05:58:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.503 05:58:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:35.503 05:58:01 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.503 05:58:01 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.503 --rc genhtml_branch_coverage=1 00:05:35.503 --rc genhtml_function_coverage=1 00:05:35.503 --rc genhtml_legend=1 00:05:35.503 --rc geninfo_all_blocks=1 00:05:35.503 --rc geninfo_unexecuted_blocks=1 00:05:35.503 00:05:35.503 ' 00:05:35.503 05:58:01 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.503 --rc genhtml_branch_coverage=1 00:05:35.503 --rc genhtml_function_coverage=1 00:05:35.503 --rc genhtml_legend=1 00:05:35.503 --rc geninfo_all_blocks=1 00:05:35.503 --rc geninfo_unexecuted_blocks=1 00:05:35.503 00:05:35.503 ' 00:05:35.503 05:58:01 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.503 --rc genhtml_branch_coverage=1 00:05:35.503 --rc genhtml_function_coverage=1 00:05:35.503 --rc genhtml_legend=1 00:05:35.503 --rc geninfo_all_blocks=1 00:05:35.503 --rc geninfo_unexecuted_blocks=1 00:05:35.503 00:05:35.503 ' 00:05:35.503 05:58:01 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.503 --rc genhtml_branch_coverage=1 00:05:35.503 --rc genhtml_function_coverage=1 00:05:35.503 --rc genhtml_legend=1 00:05:35.503 --rc geninfo_all_blocks=1 00:05:35.503 --rc geninfo_unexecuted_blocks=1 00:05:35.503 00:05:35.503 ' 00:05:35.503 05:58:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:35.503 OK 00:05:35.503 05:58:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.503 00:05:35.503 real 0m0.179s 00:05:35.503 user 0m0.103s 00:05:35.503 sys 0m0.082s 00:05:35.503 05:58:01 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.503 05:58:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.503 ************************************ 00:05:35.503 END TEST rpc_client 00:05:35.503 ************************************ 00:05:35.503 05:58:01 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.503 05:58:01 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.503 05:58:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.503 05:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.763 ************************************ 00:05:35.763 START TEST json_config 00:05:35.763 ************************************ 00:05:35.763 05:58:01 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.763 05:58:01 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.763 05:58:01 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.763 05:58:01 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.763 05:58:01 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.763 05:58:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.763 05:58:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.763 05:58:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.763 05:58:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.763 05:58:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.763 05:58:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.763 05:58:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.763 05:58:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.763 05:58:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.763 05:58:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.763 05:58:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.764 05:58:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:35.764 05:58:01 json_config -- scripts/common.sh@345 -- # : 1 00:05:35.764 05:58:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.764 05:58:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.764 05:58:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:35.764 05:58:01 json_config -- scripts/common.sh@353 -- # local d=1 00:05:35.764 05:58:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.764 05:58:01 json_config -- scripts/common.sh@355 -- # echo 1 00:05:35.764 05:58:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.764 05:58:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:35.764 05:58:01 json_config -- scripts/common.sh@353 -- # local d=2 00:05:35.764 05:58:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.764 05:58:01 json_config -- scripts/common.sh@355 -- # echo 2 00:05:35.764 05:58:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.764 05:58:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.764 05:58:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.764 05:58:01 json_config -- scripts/common.sh@368 -- # return 0 00:05:35.764 05:58:01 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.764 05:58:01 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.764 --rc genhtml_branch_coverage=1 00:05:35.764 --rc genhtml_function_coverage=1 00:05:35.764 --rc genhtml_legend=1 00:05:35.764 --rc geninfo_all_blocks=1 00:05:35.764 --rc geninfo_unexecuted_blocks=1 00:05:35.764 00:05:35.764 ' 00:05:35.764 05:58:01 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.764 --rc genhtml_branch_coverage=1 00:05:35.764 --rc genhtml_function_coverage=1 00:05:35.764 --rc genhtml_legend=1 00:05:35.764 --rc geninfo_all_blocks=1 00:05:35.764 --rc geninfo_unexecuted_blocks=1 00:05:35.764 00:05:35.764 ' 00:05:35.764 05:58:01 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.764 --rc genhtml_branch_coverage=1 00:05:35.764 --rc genhtml_function_coverage=1 00:05:35.764 --rc genhtml_legend=1 00:05:35.764 --rc geninfo_all_blocks=1 00:05:35.764 --rc geninfo_unexecuted_blocks=1 00:05:35.764 00:05:35.764 ' 00:05:35.764 05:58:01 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.764 --rc genhtml_branch_coverage=1 00:05:35.764 --rc genhtml_function_coverage=1 00:05:35.764 --rc genhtml_legend=1 00:05:35.764 --rc geninfo_all_blocks=1 00:05:35.764 --rc geninfo_unexecuted_blocks=1 00:05:35.764 00:05:35.764 ' 00:05:35.764 05:58:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.764 05:58:01 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c804c1ef-9fa5-4197-9d9f-38b72f371f25 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c804c1ef-9fa5-4197-9d9f-38b72f371f25 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.764 05:58:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.764 05:58:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.764 05:58:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.764 05:58:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.764 05:58:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.764 05:58:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.764 05:58:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.764 05:58:01 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.764 05:58:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@51 -- # : 0 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.764 05:58:01 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.764 05:58:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.764 05:58:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.764 05:58:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.764 05:58:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.764 05:58:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.764 05:58:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.764 WARNING: No tests are enabled so not running JSON configuration tests 00:05:35.764 05:58:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:35.764 05:58:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:35.764 00:05:35.764 real 0m0.156s 00:05:35.764 user 0m0.099s 00:05:35.764 sys 0m0.053s 00:05:35.764 05:58:01 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.764 05:58:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.764 ************************************ 00:05:35.764 END TEST json_config 00:05:35.764 ************************************ 00:05:35.764 05:58:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.764 05:58:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.764 05:58:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.764 05:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.764 ************************************ 00:05:35.764 START TEST json_config_extra_key 00:05:35.764 ************************************ 00:05:35.764 05:58:01 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:36.026 05:58:01 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.026 05:58:01 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.026 05:58:01 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.026 05:58:01 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.026 05:58:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.026 05:58:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.026 05:58:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.026 05:58:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.027 05:58:01 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:36.027 05:58:01 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.027 05:58:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.027 --rc genhtml_branch_coverage=1 00:05:36.027 --rc genhtml_function_coverage=1 00:05:36.027 --rc genhtml_legend=1 00:05:36.027 --rc geninfo_all_blocks=1 00:05:36.027 --rc geninfo_unexecuted_blocks=1 00:05:36.027 00:05:36.027 ' 00:05:36.027 05:58:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.027 --rc genhtml_branch_coverage=1 00:05:36.027 --rc genhtml_function_coverage=1 00:05:36.027 --rc genhtml_legend=1 00:05:36.027 --rc geninfo_all_blocks=1 00:05:36.027 --rc geninfo_unexecuted_blocks=1 00:05:36.027 00:05:36.027 ' 00:05:36.027 05:58:01 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.027 --rc genhtml_branch_coverage=1 00:05:36.027 --rc genhtml_function_coverage=1 00:05:36.027 --rc genhtml_legend=1 00:05:36.027 --rc geninfo_all_blocks=1 00:05:36.027 --rc geninfo_unexecuted_blocks=1 00:05:36.027 00:05:36.027 ' 00:05:36.027 05:58:01 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.027 --rc genhtml_branch_coverage=1 00:05:36.027 --rc 
genhtml_function_coverage=1 00:05:36.027 --rc genhtml_legend=1 00:05:36.027 --rc geninfo_all_blocks=1 00:05:36.027 --rc geninfo_unexecuted_blocks=1 00:05:36.027 00:05:36.027 ' 00:05:36.027 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c804c1ef-9fa5-4197-9d9f-38b72f371f25 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c804c1ef-9fa5-4197-9d9f-38b72f371f25 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.027 05:58:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.027 05:58:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.027 05:58:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.027 05:58:01 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.027 05:58:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:36.027 05:58:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.027 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.027 05:58:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.027 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:36.027 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:36.027 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.028 INFO: launching applications... 00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
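Twice above, sourcing test/nvmf/common.sh emits "[: : integer expression expected" from its line 33. The trace shows why: the guard runs '[' '' -eq 1 ']', and test rejects the empty operand for the arithmetic -eq comparison (returning status 2 in bash), which the script simply ignores and falls through. A minimal reproduction of the pitfall and one defensive spelling (the variable name below is illustrative, not the one used in nvmf/common.sh):

# Reproduce the failure mode seen in the trace: an empty value fed to
# an arithmetic test operator.
flag=""
[ "$flag" -eq 1 ]              # -> "[: : integer expression expected" (status 2)

# Defensive spelling: default the value before the numeric comparison.
[ "${flag:-0}" -eq 1 ] && echo "flag is set"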
00:05:36.028 05:58:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70065 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.028 Waiting for target to run... 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70065 /var/tmp/spdk_tgt.sock 00:05:36.028 05:58:01 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70065 ']' 00:05:36.028 05:58:01 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.028 05:58:01 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.028 05:58:01 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.028 05:58:01 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.028 05:58:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.028 05:58:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:36.028 [2024-10-01 05:58:01.540618] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:36.028 [2024-10-01 05:58:01.540745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70065 ] 00:05:36.288 [2024-10-01 05:58:01.902212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.549 [2024-10-01 05:58:01.938930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.809 00:05:36.809 INFO: shutting down applications... 00:05:36.809 05:58:02 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.809 05:58:02 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:36.809 05:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
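The shutdown sequence that follows (json_config/common.sh@38-45) is a signal-then-poll loop: send SIGINT to the target, then probe it with kill -0 every half second, giving up after 30 probes. A condensed sketch of that pattern, assuming the target pid is in $pid (simplified from the traced helper; the failure message is illustrative):

kill -SIGINT "$pid"                        # ask spdk_tgt for a clean shutdown
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break    # kill -0 probes without signaling
    sleep 0.5
done
if kill -0 "$pid" 2>/dev/null; then
    echo "target $pid did not exit within 15s" >&2
fi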
00:05:36.809 05:58:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70065 ]] 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70065 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70065 00:05:36.809 05:58:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.380 05:58:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.381 05:58:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.381 05:58:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70065 00:05:37.381 05:58:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.952 05:58:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.952 05:58:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.952 05:58:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70065 00:05:37.952 SPDK target shutdown done 00:05:37.952 Success 00:05:37.952 05:58:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.952 05:58:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:37.952 05:58:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.952 05:58:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.952 05:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:37.952 00:05:37.952 real 0m2.074s 00:05:37.952 user 0m1.491s 00:05:37.952 sys 0m0.441s 00:05:37.952 ************************************ 00:05:37.952 END TEST json_config_extra_key 00:05:37.952 ************************************ 00:05:37.952 05:58:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.952 05:58:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.952 05:58:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.952 05:58:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.952 05:58:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.952 05:58:03 -- common/autotest_common.sh@10 -- # set +x 00:05:37.952 ************************************ 00:05:37.952 START TEST alias_rpc 00:05:37.952 ************************************ 00:05:37.952 05:58:03 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.952 * Looking for test storage... 
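The same version gate precedes every test in this section (and reappears just below for alias_rpc): scripts/common.sh extracts the installed lcov version with awk '{print $NF}', asks lt 1.15 2, and on "less than" exports the LCOV 1.x --rc option set. cmp_versions splits both version strings on the characters .-: and walks the components numerically. A condensed sketch of that walk — it assumes purely numeric components, whereas the real helper validates each field through its decimal check:

# Echo lt/gt/eq for two dotted version strings, compared field by field.
cmp_versions_sketch() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; return; }
    done
    echo eq
}
cmp_versions_sketch 1.15 2    # -> lt, so the 1.x coverage flags are used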
00:05:37.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:37.953 05:58:03 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.953 05:58:03 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.953 05:58:03 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.953 05:58:03 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.953 05:58:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.214 05:58:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:38.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.214 --rc genhtml_branch_coverage=1 00:05:38.214 --rc genhtml_function_coverage=1 00:05:38.214 --rc genhtml_legend=1 00:05:38.214 --rc geninfo_all_blocks=1 00:05:38.214 --rc geninfo_unexecuted_blocks=1 00:05:38.214 00:05:38.214 ' 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:38.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.214 --rc genhtml_branch_coverage=1 00:05:38.214 --rc genhtml_function_coverage=1 00:05:38.214 --rc genhtml_legend=1 00:05:38.214 --rc geninfo_all_blocks=1 00:05:38.214 --rc geninfo_unexecuted_blocks=1 00:05:38.214 00:05:38.214 ' 00:05:38.214 05:58:03 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:38.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.214 --rc genhtml_branch_coverage=1 00:05:38.214 --rc genhtml_function_coverage=1 00:05:38.214 --rc genhtml_legend=1 00:05:38.214 --rc geninfo_all_blocks=1 00:05:38.214 --rc geninfo_unexecuted_blocks=1 00:05:38.214 00:05:38.214 ' 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:38.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.214 --rc genhtml_branch_coverage=1 00:05:38.214 --rc genhtml_function_coverage=1 00:05:38.214 --rc genhtml_legend=1 00:05:38.214 --rc geninfo_all_blocks=1 00:05:38.214 --rc geninfo_unexecuted_blocks=1 00:05:38.214 00:05:38.214 ' 00:05:38.214 05:58:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.214 05:58:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70146 00:05:38.214 05:58:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70146 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70146 ']' 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.214 05:58:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.214 05:58:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.214 [2024-10-01 05:58:03.647348] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:38.214 [2024-10-01 05:58:03.647455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70146 ] 00:05:38.214 [2024-10-01 05:58:03.777087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.214 [2024-10-01 05:58:03.819758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:39.157 05:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:39.157 05:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70146 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70146 ']' 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70146 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70146 00:05:39.157 killing process with pid 70146 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70146' 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@969 -- # kill 70146 00:05:39.157 05:58:04 alias_rpc -- common/autotest_common.sh@974 -- # wait 70146 00:05:39.729 00:05:39.729 real 0m1.610s 00:05:39.729 user 0m1.692s 00:05:39.729 sys 0m0.404s 00:05:39.729 05:58:05 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.729 05:58:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.729 ************************************ 00:05:39.729 END TEST alias_rpc 00:05:39.729 ************************************ 00:05:39.729 05:58:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:39.730 05:58:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.730 05:58:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.730 05:58:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.730 05:58:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.730 ************************************ 00:05:39.730 START TEST spdkcli_tcp 00:05:39.730 ************************************ 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.730 * Looking for test storage... 
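The alias_rpc test above exercises configuration replay: alias_rpc.sh@17 pipes a JSON config into the running target with scripts/rpc.py load_config -i, then tears the target down. A sketch of driving a target the same way — the config body below is illustrative, not the test's actual input; the -i flag is taken verbatim from the trace:

# Feed a JSON config to a running spdk_tgt over its default RPC socket.
cat <<'EOF' | /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "num_blocks": 256, "block_size": 512 } }
      ]
    }
  ]
}
EOF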
00:05:39.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.730 05:58:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.730 --rc genhtml_branch_coverage=1 00:05:39.730 --rc genhtml_function_coverage=1 00:05:39.730 --rc genhtml_legend=1 00:05:39.730 --rc geninfo_all_blocks=1 00:05:39.730 --rc geninfo_unexecuted_blocks=1 00:05:39.730 00:05:39.730 ' 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.730 --rc genhtml_branch_coverage=1 00:05:39.730 --rc genhtml_function_coverage=1 00:05:39.730 --rc genhtml_legend=1 00:05:39.730 --rc geninfo_all_blocks=1 00:05:39.730 --rc geninfo_unexecuted_blocks=1 00:05:39.730 
00:05:39.730 ' 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.730 --rc genhtml_branch_coverage=1 00:05:39.730 --rc genhtml_function_coverage=1 00:05:39.730 --rc genhtml_legend=1 00:05:39.730 --rc geninfo_all_blocks=1 00:05:39.730 --rc geninfo_unexecuted_blocks=1 00:05:39.730 00:05:39.730 ' 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.730 --rc genhtml_branch_coverage=1 00:05:39.730 --rc genhtml_function_coverage=1 00:05:39.730 --rc genhtml_legend=1 00:05:39.730 --rc geninfo_all_blocks=1 00:05:39.730 --rc geninfo_unexecuted_blocks=1 00:05:39.730 00:05:39.730 ' 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70227 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70227 00:05:39.730 05:58:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70227 ']' 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.730 05:58:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.730 [2024-10-01 05:58:05.310234] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
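The spdkcli_tcp test works by publishing the target's UNIX-domain RPC socket over TCP: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py then talks to the TCP side, as the next entries show. A sketch of the bridge, with addresses and flags as in the trace (-r and -t are rpc.py's connection-retry and timeout options):

# Expose the UNIX RPC socket on TCP port 9998.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Query the registered method table over TCP; the retries cover the
# window before socat is actually listening.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true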
00:05:39.730 [2024-10-01 05:58:05.310358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70227 ] 00:05:39.991 [2024-10-01 05:58:05.445486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.991 [2024-10-01 05:58:05.488043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.991 [2024-10-01 05:58:05.488083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.576 05:58:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.576 05:58:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:40.576 05:58:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70244 00:05:40.576 05:58:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:40.576 05:58:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.851 [ 00:05:40.851 "bdev_malloc_delete", 00:05:40.851 "bdev_malloc_create", 00:05:40.851 "bdev_null_resize", 00:05:40.851 "bdev_null_delete", 00:05:40.851 "bdev_null_create", 00:05:40.851 "bdev_nvme_cuse_unregister", 00:05:40.851 "bdev_nvme_cuse_register", 00:05:40.851 "bdev_opal_new_user", 00:05:40.851 "bdev_opal_set_lock_state", 00:05:40.851 "bdev_opal_delete", 00:05:40.851 "bdev_opal_get_info", 00:05:40.851 "bdev_opal_create", 00:05:40.851 "bdev_nvme_opal_revert", 00:05:40.851 "bdev_nvme_opal_init", 00:05:40.851 "bdev_nvme_send_cmd", 00:05:40.851 "bdev_nvme_set_keys", 00:05:40.851 "bdev_nvme_get_path_iostat", 00:05:40.851 "bdev_nvme_get_mdns_discovery_info", 00:05:40.851 "bdev_nvme_stop_mdns_discovery", 00:05:40.851 "bdev_nvme_start_mdns_discovery", 00:05:40.851 "bdev_nvme_set_multipath_policy", 00:05:40.851 "bdev_nvme_set_preferred_path", 00:05:40.851 "bdev_nvme_get_io_paths", 00:05:40.851 "bdev_nvme_remove_error_injection", 00:05:40.851 "bdev_nvme_add_error_injection", 00:05:40.851 "bdev_nvme_get_discovery_info", 00:05:40.851 "bdev_nvme_stop_discovery", 00:05:40.851 "bdev_nvme_start_discovery", 00:05:40.851 "bdev_nvme_get_controller_health_info", 00:05:40.851 "bdev_nvme_disable_controller", 00:05:40.851 "bdev_nvme_enable_controller", 00:05:40.851 "bdev_nvme_reset_controller", 00:05:40.851 "bdev_nvme_get_transport_statistics", 00:05:40.851 "bdev_nvme_apply_firmware", 00:05:40.851 "bdev_nvme_detach_controller", 00:05:40.851 "bdev_nvme_get_controllers", 00:05:40.851 "bdev_nvme_attach_controller", 00:05:40.851 "bdev_nvme_set_hotplug", 00:05:40.851 "bdev_nvme_set_options", 00:05:40.851 "bdev_passthru_delete", 00:05:40.851 "bdev_passthru_create", 00:05:40.851 "bdev_lvol_set_parent_bdev", 00:05:40.851 "bdev_lvol_set_parent", 00:05:40.851 "bdev_lvol_check_shallow_copy", 00:05:40.851 "bdev_lvol_start_shallow_copy", 00:05:40.851 "bdev_lvol_grow_lvstore", 00:05:40.851 "bdev_lvol_get_lvols", 00:05:40.851 "bdev_lvol_get_lvstores", 00:05:40.851 "bdev_lvol_delete", 00:05:40.851 "bdev_lvol_set_read_only", 00:05:40.851 "bdev_lvol_resize", 00:05:40.851 "bdev_lvol_decouple_parent", 00:05:40.851 "bdev_lvol_inflate", 00:05:40.851 "bdev_lvol_rename", 00:05:40.851 "bdev_lvol_clone_bdev", 00:05:40.851 "bdev_lvol_clone", 00:05:40.851 "bdev_lvol_snapshot", 00:05:40.851 "bdev_lvol_create", 00:05:40.851 "bdev_lvol_delete_lvstore", 00:05:40.851 "bdev_lvol_rename_lvstore", 00:05:40.851 
"bdev_lvol_create_lvstore", 00:05:40.851 "bdev_raid_set_options", 00:05:40.851 "bdev_raid_remove_base_bdev", 00:05:40.852 "bdev_raid_add_base_bdev", 00:05:40.852 "bdev_raid_delete", 00:05:40.852 "bdev_raid_create", 00:05:40.852 "bdev_raid_get_bdevs", 00:05:40.852 "bdev_error_inject_error", 00:05:40.852 "bdev_error_delete", 00:05:40.852 "bdev_error_create", 00:05:40.852 "bdev_split_delete", 00:05:40.852 "bdev_split_create", 00:05:40.852 "bdev_delay_delete", 00:05:40.852 "bdev_delay_create", 00:05:40.852 "bdev_delay_update_latency", 00:05:40.852 "bdev_zone_block_delete", 00:05:40.852 "bdev_zone_block_create", 00:05:40.852 "blobfs_create", 00:05:40.852 "blobfs_detect", 00:05:40.852 "blobfs_set_cache_size", 00:05:40.852 "bdev_xnvme_delete", 00:05:40.852 "bdev_xnvme_create", 00:05:40.852 "bdev_aio_delete", 00:05:40.852 "bdev_aio_rescan", 00:05:40.852 "bdev_aio_create", 00:05:40.852 "bdev_ftl_set_property", 00:05:40.852 "bdev_ftl_get_properties", 00:05:40.852 "bdev_ftl_get_stats", 00:05:40.852 "bdev_ftl_unmap", 00:05:40.852 "bdev_ftl_unload", 00:05:40.852 "bdev_ftl_delete", 00:05:40.852 "bdev_ftl_load", 00:05:40.852 "bdev_ftl_create", 00:05:40.852 "bdev_virtio_attach_controller", 00:05:40.852 "bdev_virtio_scsi_get_devices", 00:05:40.852 "bdev_virtio_detach_controller", 00:05:40.852 "bdev_virtio_blk_set_hotplug", 00:05:40.852 "bdev_iscsi_delete", 00:05:40.852 "bdev_iscsi_create", 00:05:40.852 "bdev_iscsi_set_options", 00:05:40.852 "accel_error_inject_error", 00:05:40.852 "ioat_scan_accel_module", 00:05:40.852 "dsa_scan_accel_module", 00:05:40.852 "iaa_scan_accel_module", 00:05:40.852 "keyring_file_remove_key", 00:05:40.852 "keyring_file_add_key", 00:05:40.852 "keyring_linux_set_options", 00:05:40.852 "fsdev_aio_delete", 00:05:40.852 "fsdev_aio_create", 00:05:40.852 "iscsi_get_histogram", 00:05:40.852 "iscsi_enable_histogram", 00:05:40.852 "iscsi_set_options", 00:05:40.852 "iscsi_get_auth_groups", 00:05:40.852 "iscsi_auth_group_remove_secret", 00:05:40.852 "iscsi_auth_group_add_secret", 00:05:40.852 "iscsi_delete_auth_group", 00:05:40.852 "iscsi_create_auth_group", 00:05:40.852 "iscsi_set_discovery_auth", 00:05:40.852 "iscsi_get_options", 00:05:40.852 "iscsi_target_node_request_logout", 00:05:40.852 "iscsi_target_node_set_redirect", 00:05:40.852 "iscsi_target_node_set_auth", 00:05:40.852 "iscsi_target_node_add_lun", 00:05:40.852 "iscsi_get_stats", 00:05:40.852 "iscsi_get_connections", 00:05:40.852 "iscsi_portal_group_set_auth", 00:05:40.852 "iscsi_start_portal_group", 00:05:40.852 "iscsi_delete_portal_group", 00:05:40.852 "iscsi_create_portal_group", 00:05:40.852 "iscsi_get_portal_groups", 00:05:40.852 "iscsi_delete_target_node", 00:05:40.852 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.852 "iscsi_target_node_add_pg_ig_maps", 00:05:40.852 "iscsi_create_target_node", 00:05:40.852 "iscsi_get_target_nodes", 00:05:40.852 "iscsi_delete_initiator_group", 00:05:40.852 "iscsi_initiator_group_remove_initiators", 00:05:40.852 "iscsi_initiator_group_add_initiators", 00:05:40.852 "iscsi_create_initiator_group", 00:05:40.852 "iscsi_get_initiator_groups", 00:05:40.852 "nvmf_set_crdt", 00:05:40.852 "nvmf_set_config", 00:05:40.852 "nvmf_set_max_subsystems", 00:05:40.852 "nvmf_stop_mdns_prr", 00:05:40.852 "nvmf_publish_mdns_prr", 00:05:40.852 "nvmf_subsystem_get_listeners", 00:05:40.852 "nvmf_subsystem_get_qpairs", 00:05:40.852 "nvmf_subsystem_get_controllers", 00:05:40.852 "nvmf_get_stats", 00:05:40.852 "nvmf_get_transports", 00:05:40.852 "nvmf_create_transport", 00:05:40.852 "nvmf_get_targets", 00:05:40.852 
"nvmf_delete_target", 00:05:40.852 "nvmf_create_target", 00:05:40.852 "nvmf_subsystem_allow_any_host", 00:05:40.852 "nvmf_subsystem_set_keys", 00:05:40.852 "nvmf_subsystem_remove_host", 00:05:40.852 "nvmf_subsystem_add_host", 00:05:40.852 "nvmf_ns_remove_host", 00:05:40.852 "nvmf_ns_add_host", 00:05:40.852 "nvmf_subsystem_remove_ns", 00:05:40.852 "nvmf_subsystem_set_ns_ana_group", 00:05:40.852 "nvmf_subsystem_add_ns", 00:05:40.852 "nvmf_subsystem_listener_set_ana_state", 00:05:40.852 "nvmf_discovery_get_referrals", 00:05:40.852 "nvmf_discovery_remove_referral", 00:05:40.852 "nvmf_discovery_add_referral", 00:05:40.852 "nvmf_subsystem_remove_listener", 00:05:40.852 "nvmf_subsystem_add_listener", 00:05:40.852 "nvmf_delete_subsystem", 00:05:40.852 "nvmf_create_subsystem", 00:05:40.852 "nvmf_get_subsystems", 00:05:40.852 "env_dpdk_get_mem_stats", 00:05:40.852 "nbd_get_disks", 00:05:40.852 "nbd_stop_disk", 00:05:40.852 "nbd_start_disk", 00:05:40.852 "ublk_recover_disk", 00:05:40.852 "ublk_get_disks", 00:05:40.852 "ublk_stop_disk", 00:05:40.852 "ublk_start_disk", 00:05:40.852 "ublk_destroy_target", 00:05:40.852 "ublk_create_target", 00:05:40.852 "virtio_blk_create_transport", 00:05:40.852 "virtio_blk_get_transports", 00:05:40.852 "vhost_controller_set_coalescing", 00:05:40.852 "vhost_get_controllers", 00:05:40.852 "vhost_delete_controller", 00:05:40.852 "vhost_create_blk_controller", 00:05:40.852 "vhost_scsi_controller_remove_target", 00:05:40.852 "vhost_scsi_controller_add_target", 00:05:40.852 "vhost_start_scsi_controller", 00:05:40.852 "vhost_create_scsi_controller", 00:05:40.852 "thread_set_cpumask", 00:05:40.852 "scheduler_set_options", 00:05:40.852 "framework_get_governor", 00:05:40.852 "framework_get_scheduler", 00:05:40.852 "framework_set_scheduler", 00:05:40.852 "framework_get_reactors", 00:05:40.852 "thread_get_io_channels", 00:05:40.852 "thread_get_pollers", 00:05:40.852 "thread_get_stats", 00:05:40.852 "framework_monitor_context_switch", 00:05:40.852 "spdk_kill_instance", 00:05:40.852 "log_enable_timestamps", 00:05:40.852 "log_get_flags", 00:05:40.852 "log_clear_flag", 00:05:40.852 "log_set_flag", 00:05:40.852 "log_get_level", 00:05:40.852 "log_set_level", 00:05:40.852 "log_get_print_level", 00:05:40.852 "log_set_print_level", 00:05:40.852 "framework_enable_cpumask_locks", 00:05:40.852 "framework_disable_cpumask_locks", 00:05:40.852 "framework_wait_init", 00:05:40.852 "framework_start_init", 00:05:40.852 "scsi_get_devices", 00:05:40.852 "bdev_get_histogram", 00:05:40.852 "bdev_enable_histogram", 00:05:40.852 "bdev_set_qos_limit", 00:05:40.852 "bdev_set_qd_sampling_period", 00:05:40.852 "bdev_get_bdevs", 00:05:40.852 "bdev_reset_iostat", 00:05:40.852 "bdev_get_iostat", 00:05:40.852 "bdev_examine", 00:05:40.852 "bdev_wait_for_examine", 00:05:40.852 "bdev_set_options", 00:05:40.852 "accel_get_stats", 00:05:40.852 "accel_set_options", 00:05:40.852 "accel_set_driver", 00:05:40.852 "accel_crypto_key_destroy", 00:05:40.852 "accel_crypto_keys_get", 00:05:40.852 "accel_crypto_key_create", 00:05:40.852 "accel_assign_opc", 00:05:40.852 "accel_get_module_info", 00:05:40.852 "accel_get_opc_assignments", 00:05:40.852 "vmd_rescan", 00:05:40.852 "vmd_remove_device", 00:05:40.852 "vmd_enable", 00:05:40.852 "sock_get_default_impl", 00:05:40.852 "sock_set_default_impl", 00:05:40.852 "sock_impl_set_options", 00:05:40.852 "sock_impl_get_options", 00:05:40.852 "iobuf_get_stats", 00:05:40.852 "iobuf_set_options", 00:05:40.852 "keyring_get_keys", 00:05:40.852 "framework_get_pci_devices", 00:05:40.852 
"framework_get_config", 00:05:40.852 "framework_get_subsystems", 00:05:40.852 "fsdev_set_opts", 00:05:40.852 "fsdev_get_opts", 00:05:40.852 "trace_get_info", 00:05:40.852 "trace_get_tpoint_group_mask", 00:05:40.852 "trace_disable_tpoint_group", 00:05:40.852 "trace_enable_tpoint_group", 00:05:40.852 "trace_clear_tpoint_mask", 00:05:40.852 "trace_set_tpoint_mask", 00:05:40.852 "notify_get_notifications", 00:05:40.852 "notify_get_types", 00:05:40.852 "spdk_get_version", 00:05:40.852 "rpc_get_methods" 00:05:40.852 ] 00:05:40.852 05:58:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.852 05:58:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:40.852 05:58:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70227 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70227 ']' 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70227 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70227 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.852 killing process with pid 70227 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70227' 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70227 00:05:40.852 05:58:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70227 00:05:41.114 00:05:41.114 real 0m1.635s 00:05:41.114 user 0m2.838s 00:05:41.114 sys 0m0.442s 00:05:41.114 05:58:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.114 05:58:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.114 ************************************ 00:05:41.114 END TEST spdkcli_tcp 00:05:41.114 ************************************ 00:05:41.376 05:58:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.376 05:58:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.376 05:58:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.376 05:58:06 -- common/autotest_common.sh@10 -- # set +x 00:05:41.376 ************************************ 00:05:41.376 START TEST dpdk_mem_utility 00:05:41.376 ************************************ 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.376 * Looking for test storage... 
00:05:41.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.376 05:58:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.376 --rc genhtml_branch_coverage=1 00:05:41.376 --rc genhtml_function_coverage=1 00:05:41.376 --rc genhtml_legend=1 00:05:41.376 --rc geninfo_all_blocks=1 00:05:41.376 --rc geninfo_unexecuted_blocks=1 00:05:41.376 00:05:41.376 ' 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.376 --rc 
genhtml_branch_coverage=1 00:05:41.376 --rc genhtml_function_coverage=1 00:05:41.376 --rc genhtml_legend=1 00:05:41.376 --rc geninfo_all_blocks=1 00:05:41.376 --rc geninfo_unexecuted_blocks=1 00:05:41.376 00:05:41.376 ' 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.376 --rc genhtml_branch_coverage=1 00:05:41.376 --rc genhtml_function_coverage=1 00:05:41.376 --rc genhtml_legend=1 00:05:41.376 --rc geninfo_all_blocks=1 00:05:41.376 --rc geninfo_unexecuted_blocks=1 00:05:41.376 00:05:41.376 ' 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.376 --rc genhtml_branch_coverage=1 00:05:41.376 --rc genhtml_function_coverage=1 00:05:41.376 --rc genhtml_legend=1 00:05:41.376 --rc geninfo_all_blocks=1 00:05:41.376 --rc geninfo_unexecuted_blocks=1 00:05:41.376 00:05:41.376 ' 00:05:41.376 05:58:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:41.376 05:58:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70321 00:05:41.376 05:58:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70321 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70321 ']' 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.376 05:58:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.376 05:58:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.376 [2024-10-01 05:58:06.976828] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:41.376 [2024-10-01 05:58:06.977223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70321 ] 00:05:41.638 [2024-10-01 05:58:07.108188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.638 [2024-10-01 05:58:07.153676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.211 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.211 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:42.211 05:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:42.211 05:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:42.211 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.211 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.211 { 00:05:42.211 "filename": "/tmp/spdk_mem_dump.txt" 00:05:42.211 } 00:05:42.211 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.211 05:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:42.474 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:42.474 1 heaps totaling size 860.000000 MiB 00:05:42.474 size: 860.000000 MiB heap id: 0 00:05:42.474 end heaps---------- 00:05:42.474 9 mempools totaling size 642.649841 MiB 00:05:42.474 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:42.474 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:42.474 size: 92.545471 MiB name: bdev_io_70321 00:05:42.474 size: 51.011292 MiB name: evtpool_70321 00:05:42.474 size: 50.003479 MiB name: msgpool_70321 00:05:42.474 size: 36.509338 MiB name: fsdev_io_70321 00:05:42.474 size: 21.763794 MiB name: PDU_Pool 00:05:42.474 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:42.474 size: 0.026123 MiB name: Session_Pool 00:05:42.474 end mempools------- 00:05:42.474 6 memzones totaling size 4.142822 MiB 00:05:42.474 size: 1.000366 MiB name: RG_ring_0_70321 00:05:42.474 size: 1.000366 MiB name: RG_ring_1_70321 00:05:42.474 size: 1.000366 MiB name: RG_ring_4_70321 00:05:42.474 size: 1.000366 MiB name: RG_ring_5_70321 00:05:42.474 size: 0.125366 MiB name: RG_ring_2_70321 00:05:42.474 size: 0.015991 MiB name: RG_ring_3_70321 00:05:42.474 end memzones------- 00:05:42.474 05:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:42.474 heap id: 0 total size: 860.000000 MiB number of busy elements: 313 number of free elements: 16 00:05:42.474 list of free elements. 
size: 13.935425 MiB 00:05:42.474 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:42.474 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:42.474 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:42.474 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:42.474 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:42.474 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:42.474 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:42.474 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:42.474 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:42.474 element at address: 0x20001d800000 with size: 0.566956 MiB 00:05:42.474 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:42.474 element at address: 0x200003e00000 with size: 0.488281 MiB 00:05:42.474 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:42.474 element at address: 0x200007000000 with size: 0.480286 MiB 00:05:42.474 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:42.474 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:42.474 list of standard malloc elements. size: 199.267883 MiB 00:05:42.474 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:42.474 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:42.474 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:42.474 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:42.474 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:42.474 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:42.474 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:42.474 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:42.474 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:42.474 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:05:42.474 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:42.474 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:42.475 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:42.475 element at 
address: 0x200003e7d780 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707af40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:42.475 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87d700 
with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891240 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891300 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891480 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891540 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891600 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d892ec0 with size: 0.000183 MiB 
00:05:42.475 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:42.475 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:42.476 element at 
address: 0x20001d895440 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e580 
with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:42.476 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:42.476 list of memzone associated elements. 
size: 646.796692 MiB 00:05:42.476 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:42.476 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:42.476 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:42.476 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:42.476 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:42.476 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70321_0 00:05:42.476 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:42.476 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70321_0 00:05:42.477 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:42.477 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70321_0 00:05:42.477 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:42.477 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70321_0 00:05:42.477 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:42.477 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:42.477 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:42.477 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:42.477 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:42.477 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70321 00:05:42.477 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:42.477 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70321 00:05:42.477 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:42.477 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70321 00:05:42.477 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:42.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:42.477 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:42.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:42.477 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:42.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:42.477 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:42.477 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:42.477 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:42.477 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70321 00:05:42.477 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:42.477 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70321 00:05:42.477 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:42.477 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70321 00:05:42.477 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:42.477 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70321 00:05:42.477 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:42.477 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70321 00:05:42.477 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:42.477 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70321 00:05:42.477 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:42.477 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:42.477 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:42.477 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:42.477 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:42.477 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:42.477 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:05:42.477 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70321 00:05:42.477 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:42.477 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:42.477 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:42.477 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:42.477 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:05:42.477 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70321 00:05:42.477 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:42.477 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:42.477 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:42.477 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70321 00:05:42.477 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:42.477 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70321 00:05:42.477 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:05:42.477 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70321 00:05:42.477 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:42.477 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:42.477 05:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:42.477 05:58:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70321 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70321 ']' 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70321 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70321 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.477 killing process with pid 70321 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70321' 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70321 00:05:42.477 05:58:07 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70321 00:05:42.738 00:05:42.738 real 0m1.428s 00:05:42.738 user 0m1.484s 00:05:42.738 sys 0m0.349s 00:05:42.738 05:58:08 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.738 05:58:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.738 ************************************ 00:05:42.738 END TEST dpdk_mem_utility 00:05:42.738 ************************************ 00:05:42.738 05:58:08 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.738 05:58:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.738 05:58:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.738 05:58:08 -- common/autotest_common.sh@10 -- # set +x 
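The dump above comes from a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write its DPDK heap state to /tmp/spdk_mem_dump.txt (the filename returned in the RPC reply), and scripts/dpdk_mem_info.py parses that dump, with -m selecting a heap id as in the '-m 0' call above. A minimal Bash sketch of the same flow against a running spdk_tgt, assuming the repo path used in this job and the default /var/tmp/spdk.sock RPC socket:

    SPDK=/home/vagrant/spdk_repo/spdk   # repo root as used in this job

    # Ask the target to write its DPDK memory stats; the reply names the
    # dump file (/tmp/spdk_mem_dump.txt by default, as shown above).
    "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize heaps, mempools and memzones, then detail heap id 0,
    # matching the bare and '-m 0' invocations traced in this test.
    "$SPDK"/scripts/dpdk_mem_info.py
    "$SPDK"/scripts/dpdk_mem_info.py -m 0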
00:05:42.738 ************************************ 00:05:42.738 START TEST event 00:05:42.738 ************************************ 00:05:42.738 05:58:08 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.738 * Looking for test storage... 00:05:42.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.738 05:58:08 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.738 05:58:08 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.738 05:58:08 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.738 05:58:08 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.738 05:58:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.738 05:58:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.738 05:58:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.738 05:58:08 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.738 05:58:08 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.738 05:58:08 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.738 05:58:08 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.738 05:58:08 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.738 05:58:08 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.738 05:58:08 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.738 05:58:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.738 05:58:08 event -- scripts/common.sh@344 -- # case "$op" in 00:05:42.738 05:58:08 event -- scripts/common.sh@345 -- # : 1 00:05:42.738 05:58:08 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.738 05:58:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.738 05:58:08 event -- scripts/common.sh@365 -- # decimal 1 00:05:42.999 05:58:08 event -- scripts/common.sh@353 -- # local d=1 00:05:42.999 05:58:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.999 05:58:08 event -- scripts/common.sh@355 -- # echo 1 00:05:42.999 05:58:08 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.999 05:58:08 event -- scripts/common.sh@366 -- # decimal 2 00:05:42.999 05:58:08 event -- scripts/common.sh@353 -- # local d=2 00:05:42.999 05:58:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.999 05:58:08 event -- scripts/common.sh@355 -- # echo 2 00:05:42.999 05:58:08 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.999 05:58:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.999 05:58:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.999 05:58:08 event -- scripts/common.sh@368 -- # return 0 00:05:42.999 05:58:08 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.999 05:58:08 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.999 --rc genhtml_branch_coverage=1 00:05:42.999 --rc genhtml_function_coverage=1 00:05:42.999 --rc genhtml_legend=1 00:05:42.999 --rc geninfo_all_blocks=1 00:05:42.999 --rc geninfo_unexecuted_blocks=1 00:05:42.999 00:05:42.999 ' 00:05:42.999 05:58:08 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.999 --rc genhtml_branch_coverage=1 00:05:42.999 --rc genhtml_function_coverage=1 00:05:42.999 --rc genhtml_legend=1 00:05:42.999 --rc 
geninfo_all_blocks=1 00:05:42.999 --rc geninfo_unexecuted_blocks=1 00:05:42.999 00:05:42.999 ' 00:05:42.999 05:58:08 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.999 --rc genhtml_branch_coverage=1 00:05:42.999 --rc genhtml_function_coverage=1 00:05:42.999 --rc genhtml_legend=1 00:05:42.999 --rc geninfo_all_blocks=1 00:05:42.999 --rc geninfo_unexecuted_blocks=1 00:05:42.999 00:05:42.999 ' 00:05:42.999 05:58:08 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.999 --rc genhtml_branch_coverage=1 00:05:42.999 --rc genhtml_function_coverage=1 00:05:42.999 --rc genhtml_legend=1 00:05:42.999 --rc geninfo_all_blocks=1 00:05:42.999 --rc geninfo_unexecuted_blocks=1 00:05:42.999 00:05:42.999 ' 00:05:42.999 05:58:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:42.999 05:58:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.999 05:58:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.999 05:58:08 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:42.999 05:58:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.999 05:58:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.999 ************************************ 00:05:42.999 START TEST event_perf 00:05:42.999 ************************************ 00:05:43.000 05:58:08 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:43.000 Running I/O for 1 seconds...[2024-10-01 05:58:08.391681] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:43.000 [2024-10-01 05:58:08.391787] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70402 ] 00:05:43.000 [2024-10-01 05:58:08.527305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.000 [2024-10-01 05:58:08.560684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.000 [2024-10-01 05:58:08.560995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.000 [2024-10-01 05:58:08.561063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.000 [2024-10-01 05:58:08.561142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.389 Running I/O for 1 seconds... 00:05:44.389 lcore 0: 187238 00:05:44.389 lcore 1: 187239 00:05:44.389 lcore 2: 187238 00:05:44.389 lcore 3: 187238 00:05:44.389 done. 
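The event_perf results above come from running one reactor per lcore in the 0xF mask for one second (-t 1); each "lcore N" line is the number of events that reactor processed. The binary can also be run by hand outside run_test; a sketch using the paths traced in this log (the 0x1 mask variant is an illustrative narrower run, not part of this job):

    # Same invocation the harness traced above: four reactors, 1-second run.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

    # Illustrative single-core smoke check (mask 0x1).
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x1 -t 1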
00:05:44.389 00:05:44.389 real 0m1.250s 00:05:44.389 user 0m4.063s 00:05:44.389 sys 0m0.071s 00:05:44.389 05:58:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.389 ************************************ 00:05:44.389 05:58:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.389 END TEST event_perf 00:05:44.389 ************************************ 00:05:44.389 05:58:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.389 05:58:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:44.389 05:58:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.389 05:58:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.389 ************************************ 00:05:44.389 START TEST event_reactor 00:05:44.389 ************************************ 00:05:44.389 05:58:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.389 [2024-10-01 05:58:09.681458] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:44.389 [2024-10-01 05:58:09.681565] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70441 ] 00:05:44.389 [2024-10-01 05:58:09.816154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.389 [2024-10-01 05:58:09.846958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.333 test_start 00:05:45.333 oneshot 00:05:45.333 tick 100 00:05:45.333 tick 100 00:05:45.333 tick 250 00:05:45.333 tick 100 00:05:45.333 tick 100 00:05:45.333 tick 100 00:05:45.333 tick 250 00:05:45.333 tick 500 00:05:45.333 tick 100 00:05:45.333 tick 100 00:05:45.333 tick 250 00:05:45.333 tick 100 00:05:45.333 tick 100 00:05:45.333 test_end 00:05:45.333 00:05:45.333 real 0m1.243s 00:05:45.333 user 0m1.085s 00:05:45.333 sys 0m0.052s 00:05:45.333 05:58:10 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.333 05:58:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:45.333 ************************************ 00:05:45.333 END TEST event_reactor 00:05:45.333 ************************************ 00:05:45.333 05:58:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.333 05:58:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:45.333 05:58:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.333 05:58:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.333 ************************************ 00:05:45.333 START TEST event_reactor_perf 00:05:45.333 ************************************ 00:05:45.333 05:58:10 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.596 [2024-10-01 05:58:10.966898] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:45.596 [2024-10-01 05:58:10.967007] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70472 ] 00:05:45.596 [2024-10-01 05:58:11.100678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.596 [2024-10-01 05:58:11.131753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.988 test_start 00:05:46.988 test_end 00:05:46.988 Performance: 312038 events per second 00:05:46.988 00:05:46.988 real 0m1.246s 00:05:46.988 user 0m1.082s 00:05:46.988 sys 0m0.057s 00:05:46.988 05:58:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.988 05:58:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.988 ************************************ 00:05:46.988 END TEST event_reactor_perf 00:05:46.988 ************************************ 00:05:46.988 05:58:12 event -- event/event.sh@49 -- # uname -s 00:05:46.988 05:58:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.988 05:58:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.988 05:58:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.988 05:58:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.988 05:58:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.988 ************************************ 00:05:46.988 START TEST event_scheduler 00:05:46.988 ************************************ 00:05:46.988 05:58:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.988 * Looking for test storage... 
00:05:46.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:46.988 05:58:12 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.988 05:58:12 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.988 05:58:12 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.988 05:58:12 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.988 05:58:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:46.988 05:58:12 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.988 05:58:12 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.989 --rc genhtml_branch_coverage=1 00:05:46.989 --rc genhtml_function_coverage=1 00:05:46.989 --rc genhtml_legend=1 00:05:46.989 --rc geninfo_all_blocks=1 00:05:46.989 --rc geninfo_unexecuted_blocks=1 00:05:46.989 00:05:46.989 ' 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.989 --rc genhtml_branch_coverage=1 00:05:46.989 --rc genhtml_function_coverage=1 00:05:46.989 --rc genhtml_legend=1 00:05:46.989 --rc geninfo_all_blocks=1 00:05:46.989 --rc geninfo_unexecuted_blocks=1 00:05:46.989 00:05:46.989 ' 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.989 --rc genhtml_branch_coverage=1 00:05:46.989 --rc genhtml_function_coverage=1 00:05:46.989 --rc genhtml_legend=1 00:05:46.989 --rc geninfo_all_blocks=1 00:05:46.989 --rc geninfo_unexecuted_blocks=1 00:05:46.989 00:05:46.989 ' 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.989 --rc genhtml_branch_coverage=1 00:05:46.989 --rc genhtml_function_coverage=1 00:05:46.989 --rc genhtml_legend=1 00:05:46.989 --rc geninfo_all_blocks=1 00:05:46.989 --rc geninfo_unexecuted_blocks=1 00:05:46.989 00:05:46.989 ' 00:05:46.989 05:58:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:46.989 05:58:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70543 00:05:46.989 05:58:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.989 05:58:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70543 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70543 ']' 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.989 05:58:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:46.989 05:58:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 [2024-10-01 05:58:12.427431] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:05:46.989 [2024-10-01 05:58:12.427549] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70543 ] 00:05:46.989 [2024-10-01 05:58:12.562014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.989 [2024-10-01 05:58:12.596561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.989 [2024-10-01 05:58:12.596897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.989 [2024-10-01 05:58:12.596962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.989 [2024-10-01 05:58:12.596910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:47.930 05:58:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.930 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.930 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.930 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.930 POWER: Cannot set governor of lcore 0 to performance 00:05:47.930 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.930 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.930 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:47.930 POWER: Unable to set Power Management Environment for lcore 0 00:05:47.930 [2024-10-01 05:58:13.274428] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:47.930 [2024-10-01 05:58:13.274448] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:47.930 [2024-10-01 05:58:13.274466] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:47.930 [2024-10-01 05:58:13.274486] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:47.930 [2024-10-01 05:58:13.274494] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:47.930 [2024-10-01 05:58:13.274506] scheduler_dynamic.c: 
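The scheduler app was launched with --wait-for-rpc, so after EAL setup it idles until RPCs arrive; the trace that follows first selects the dynamic scheduler and later completes startup with framework_start_init. The POWER/GUEST_CHANNEL failures below are expected on a VM with no cpufreq governors: the dynamic scheduler logs that it cannot initialize the dpdk governor and proceeds with its default options. A sketch of driving the same sequence with SPDK's scripts/rpc.py client (all three method names appear in the rpc_get_methods list earlier in this log; the default /var/tmp/spdk.sock socket is assumed):

    SPDK=/home/vagrant/spdk_repo/spdk

    # Select the dynamic scheduler while the app waits in --wait-for-rpc.
    "$SPDK"/scripts/rpc.py framework_set_scheduler dynamic

    # Complete initialization, then confirm which scheduler is active.
    "$SPDK"/scripts/rpc.py framework_start_init
    "$SPDK"/scripts/rpc.py framework_get_scheduler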
431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.930 05:58:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.930 [2024-10-01 05:58:13.329101] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.930 05:58:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.930 05:58:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.930 ************************************ 00:05:47.930 START TEST scheduler_create_thread 00:05:47.930 ************************************ 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.930 2 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.930 3 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.930 4 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.930 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.931 5 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.931 6 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.931 7 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.931 8 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.931 9 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.931 10 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.931 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.502 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.502 05:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.502 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.502 05:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.880 05:58:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.880 05:58:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:49.880 05:58:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:49.880 05:58:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.880 05:58:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.821 05:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.821 00:05:50.821 real 0m3.087s 00:05:50.821 user 0m0.014s 00:05:50.821 sys 0m0.005s 00:05:50.821 05:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.821 ************************************ 00:05:50.821 END TEST scheduler_create_thread 00:05:50.821 ************************************ 00:05:50.821 05:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.082 05:58:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.082 05:58:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70543 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70543 ']' 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70543 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70543 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:51.082 killing process with pid 70543 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70543' 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70543 00:05:51.082 05:58:16 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70543 00:05:51.342 [2024-10-01 05:58:16.805651] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
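The scheduler run above is driven entirely over the app's RPC socket: scheduler.sh launches the test binary with --wait-for-rpc, switches to the dynamic scheduler before framework initialization, and then creates its pinned test threads through the scheduler_plugin RPC plugin. A minimal sketch of that sequence, assuming the repo paths seen in the trace and that rpc.py can find scheduler_plugin on its plugin path (the harness's rpc_cmd arranges this):

  # Start the test app on 4 cores (mask 0xF) with main lcore 2, held
  # before framework init so the scheduler can still be swapped.
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

  $rpc framework_set_scheduler dynamic   # scheduler.sh@39 in the trace
  $rpc framework_start_init              # scheduler.sh@40

  # One active thread pinned to core 0: -m is the cpumask, -a the requested
  # active percentage; the test creates ten threads with varying masks/loads.
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

The set_opts notices show the options the dynamic scheduler ran with: load limit 20, core limit 80, core busy 95. The POWER and dpdk_governor errors are the expected result of running in a VM with no cpufreq sysfs knobs and no virtio power channel; the scheduler notes it cannot initialize the dpdk governor and carries on without one.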
00:05:51.602 00:05:51.602 real 0m4.738s 00:05:51.602 user 0m9.016s 00:05:51.602 sys 0m0.307s 00:05:51.602 05:58:16 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.602 05:58:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.602 ************************************ 00:05:51.602 END TEST event_scheduler 00:05:51.602 ************************************ 00:05:51.602 05:58:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:51.602 05:58:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:51.602 05:58:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.602 05:58:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.602 05:58:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.602 ************************************ 00:05:51.602 START TEST app_repeat 00:05:51.602 ************************************ 00:05:51.602 05:58:17 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70643 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.602 Process app_repeat pid: 70643 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70643' 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.602 spdk_app_start Round 0 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70643 /var/tmp/spdk-nbd.sock 00:05:51.602 05:58:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70643 ']' 00:05:51.602 05:58:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.602 05:58:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.602 05:58:17 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:51.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.602 05:58:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.602 05:58:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.602 05:58:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.602 [2024-10-01 05:58:17.048856] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:05:51.602 [2024-10-01 05:58:17.048985] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70643 ] 00:05:51.602 [2024-10-01 05:58:17.174440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.602 [2024-10-01 05:58:17.203655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.602 [2024-10-01 05:58:17.203722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.541 05:58:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.541 05:58:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:52.541 05:58:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.541 Malloc0 00:05:52.541 05:58:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.798 Malloc1 00:05:52.798 05:58:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.798 05:58:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.055 /dev/nbd0 00:05:53.055 05:58:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.055 05:58:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:53.055 05:58:18 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.055 1+0 records in 00:05:53.055 1+0 records out 00:05:53.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176393 s, 23.2 MB/s 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:53.055 05:58:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:53.055 05:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.055 05:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.055 05:58:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.312 /dev/nbd1 00:05:53.312 05:58:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.312 05:58:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.312 1+0 records in 00:05:53.312 1+0 records out 00:05:53.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263295 s, 15.6 MB/s 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:53.312 05:58:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:53.312 05:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.312 05:58:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.312 05:58:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.312 05:58:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
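Worth noting in the waitfornbd traces above: the helper does not trust /proc/partitions alone. It polls up to 20 times for the nbd name, then performs a single 4 KiB O_DIRECT read and checks that the copy is non-empty, which proves the kernel can actually service I/O on the device. Reconstructed as a standalone helper (a hypothetical simplification; the real one lives in common/autotest_common.sh and uses a test file under the repo):

  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      # Retry until the device shows up in the partition table
      # (the sleep is not visible in the trace because grep hits first try).
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    # One direct-I/O read proves the device is really up.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1
    rm -f /tmp/nbdtest
  }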
00:05:53.312 05:58:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.575 05:58:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.575 { 00:05:53.575 "nbd_device": "/dev/nbd0", 00:05:53.575 "bdev_name": "Malloc0" 00:05:53.575 }, 00:05:53.575 { 00:05:53.575 "nbd_device": "/dev/nbd1", 00:05:53.575 "bdev_name": "Malloc1" 00:05:53.575 } 00:05:53.575 ]' 00:05:53.575 05:58:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.575 05:58:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.575 { 00:05:53.575 "nbd_device": "/dev/nbd0", 00:05:53.575 "bdev_name": "Malloc0" 00:05:53.575 }, 00:05:53.575 { 00:05:53.575 "nbd_device": "/dev/nbd1", 00:05:53.575 "bdev_name": "Malloc1" 00:05:53.575 } 00:05:53.575 ]' 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.575 /dev/nbd1' 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.575 /dev/nbd1' 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.575 256+0 records in 00:05:53.575 256+0 records out 00:05:53.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556124 s, 189 MB/s 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.575 256+0 records in 00:05:53.575 256+0 records out 00:05:53.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182516 s, 57.5 MB/s 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.575 256+0 records in 00:05:53.575 256+0 records out 00:05:53.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152646 s, 68.7 MB/s 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.575 05:58:19 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.575 05:58:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.576 05:58:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.576 05:58:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.576 05:58:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.576 05:58:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.834 05:58:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.091 05:58:19 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.091 05:58:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.349 05:58:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.349 05:58:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.606 05:58:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.606 [2024-10-01 05:58:20.124097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.606 [2024-10-01 05:58:20.151466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.606 [2024-10-01 05:58:20.151711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.606 [2024-10-01 05:58:20.180863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.606 [2024-10-01 05:58:20.180920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.901 05:58:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.901 spdk_app_start Round 1 00:05:57.901 05:58:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:57.901 05:58:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70643 /var/tmp/spdk-nbd.sock 00:05:57.901 05:58:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70643 ']' 00:05:57.901 05:58:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.901 05:58:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.901 05:58:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:57.901 05:58:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.901 05:58:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.901 05:58:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.901 05:58:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:57.901 05:58:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.901 Malloc0 00:05:57.901 05:58:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.162 Malloc1 00:05:58.162 05:58:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.162 05:58:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.422 /dev/nbd0 00:05:58.422 05:58:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.422 05:58:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.422 1+0 records in 00:05:58.422 1+0 records out 
00:05:58.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241002 s, 17.0 MB/s 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.422 05:58:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.422 05:58:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.422 05:58:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.422 05:58:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.681 /dev/nbd1 00:05:58.681 05:58:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.681 05:58:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.681 1+0 records in 00:05:58.681 1+0 records out 00:05:58.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319984 s, 12.8 MB/s 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.681 05:58:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.681 05:58:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.681 05:58:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.681 05:58:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.681 05:58:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.681 05:58:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.939 05:58:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.939 { 00:05:58.939 "nbd_device": "/dev/nbd0", 00:05:58.939 "bdev_name": "Malloc0" 00:05:58.939 }, 00:05:58.939 { 00:05:58.939 "nbd_device": "/dev/nbd1", 00:05:58.939 "bdev_name": "Malloc1" 00:05:58.939 } 
00:05:58.939 ]' 00:05:58.939 05:58:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.940 { 00:05:58.940 "nbd_device": "/dev/nbd0", 00:05:58.940 "bdev_name": "Malloc0" 00:05:58.940 }, 00:05:58.940 { 00:05:58.940 "nbd_device": "/dev/nbd1", 00:05:58.940 "bdev_name": "Malloc1" 00:05:58.940 } 00:05:58.940 ]' 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.940 /dev/nbd1' 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.940 /dev/nbd1' 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.940 256+0 records in 00:05:58.940 256+0 records out 00:05:58.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00986987 s, 106 MB/s 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.940 256+0 records in 00:05:58.940 256+0 records out 00:05:58.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144789 s, 72.4 MB/s 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.940 256+0 records in 00:05:58.940 256+0 records out 00:05:58.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200455 s, 52.3 MB/s 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.940 05:58:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.940 05:58:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.198 05:58:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.457 05:58:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.457 05:58:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.457 05:58:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.457 05:58:25 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.457 05:58:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.457 05:58:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.716 05:58:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.974 [2024-10-01 05:58:25.353639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.974 [2024-10-01 05:58:25.380543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.974 [2024-10-01 05:58:25.380624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.974 [2024-10-01 05:58:25.408930] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.974 [2024-10-01 05:58:25.408978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.343 05:58:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.343 05:58:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:03.343 spdk_app_start Round 2 00:06:03.343 05:58:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70643 /var/tmp/spdk-nbd.sock 00:06:03.343 05:58:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70643 ']' 00:06:03.343 05:58:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.343 05:58:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.343 05:58:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:03.343 05:58:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.343 05:58:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.343 05:58:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.343 05:58:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:03.343 05:58:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.343 Malloc0 00:06:03.343 05:58:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.343 Malloc1 00:06:03.343 05:58:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.343 05:58:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.603 /dev/nbd0 00:06:03.603 05:58:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.603 05:58:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.603 1+0 records in 00:06:03.603 1+0 records out 
00:06:03.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274859 s, 14.9 MB/s 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:03.603 05:58:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:03.603 05:58:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.603 05:58:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.603 05:58:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.864 /dev/nbd1 00:06:03.864 05:58:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.864 05:58:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.864 1+0 records in 00:06:03.864 1+0 records out 00:06:03.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254617 s, 16.1 MB/s 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:03.864 05:58:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:03.864 05:58:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.864 05:58:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.864 05:58:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.864 05:58:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.864 05:58:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.124 05:58:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.124 { 00:06:04.124 "nbd_device": "/dev/nbd0", 00:06:04.124 "bdev_name": "Malloc0" 00:06:04.124 }, 00:06:04.124 { 00:06:04.124 "nbd_device": "/dev/nbd1", 00:06:04.124 "bdev_name": "Malloc1" 00:06:04.124 } 
00:06:04.124 ]' 00:06:04.124 05:58:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.124 { 00:06:04.124 "nbd_device": "/dev/nbd0", 00:06:04.125 "bdev_name": "Malloc0" 00:06:04.125 }, 00:06:04.125 { 00:06:04.125 "nbd_device": "/dev/nbd1", 00:06:04.125 "bdev_name": "Malloc1" 00:06:04.125 } 00:06:04.125 ]' 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.125 /dev/nbd1' 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.125 /dev/nbd1' 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.125 256+0 records in 00:06:04.125 256+0 records out 00:06:04.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00812404 s, 129 MB/s 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.125 256+0 records in 00:06:04.125 256+0 records out 00:06:04.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140854 s, 74.4 MB/s 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.125 256+0 records in 00:06:04.125 256+0 records out 00:06:04.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152701 s, 68.7 MB/s 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.125 05:58:29 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.125 05:58:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.387 05:58:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.647 05:58:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.905 05:58:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.905 05:58:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.164 05:58:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.164 [2024-10-01 05:58:30.629636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.164 [2024-10-01 05:58:30.659600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.164 [2024-10-01 05:58:30.659616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.164 [2024-10-01 05:58:30.688646] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.164 [2024-10-01 05:58:30.688692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.444 05:58:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70643 /var/tmp/spdk-nbd.sock 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70643 ']' 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
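[editor's note] The nbd traces above (bdev/nbd_common.sh@17, autotest_common.sh@868-889) exercise a readiness probe for a freshly attached NBD device: poll /proc/partitions until the device name appears, then do one 4 KiB O_DIRECT read and confirm the copied file is non-empty. A condensed reconstruction of that probe follows; the 20-attempt bound, the grep/dd/stat calls, and the device name come straight from the trace, while the sleep interval, the merging of the two retry loops, and the temp-file path are assumptions.

    #!/usr/bin/env bash
    # Sketch of the waitfornbd probe seen in the xtrace above (reconstructed).
    waitfornbd() {
        local nbd_name=$1 tmp=${2:-/tmp/nbdtest} i size
        for ((i = 1; i <= 20; i++)); do
            # Device is ready once the kernel lists it in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the trace does not show the delay
        done
        ((i <= 20)) || return 1
        # One direct-I/O read of a single 4096-byte block, as in the trace.
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # a non-empty copy means the device answered reads
    }

    waitfornbd nbd1   # usage matching the trace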
00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:08.444 05:58:33 event.app_repeat -- event/event.sh@39 -- # killprocess 70643 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70643 ']' 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70643 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70643 00:06:08.444 killing process with pid 70643 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70643' 00:06:08.444 05:58:33 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70643 00:06:08.445 05:58:33 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70643 00:06:08.445 spdk_app_start is called in Round 0. 00:06:08.445 Shutdown signal received, stop current app iteration 00:06:08.445 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:06:08.445 spdk_app_start is called in Round 1. 00:06:08.445 Shutdown signal received, stop current app iteration 00:06:08.445 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:06:08.445 spdk_app_start is called in Round 2. 00:06:08.445 Shutdown signal received, stop current app iteration 00:06:08.445 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:06:08.445 spdk_app_start is called in Round 3. 00:06:08.445 Shutdown signal received, stop current app iteration 00:06:08.445 05:58:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:08.445 05:58:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:08.445 00:06:08.445 real 0m16.883s 00:06:08.445 user 0m37.776s 00:06:08.445 sys 0m2.014s 00:06:08.445 05:58:33 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.445 05:58:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.445 ************************************ 00:06:08.445 END TEST app_repeat 00:06:08.445 ************************************ 00:06:08.445 05:58:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:08.445 05:58:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.445 05:58:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.445 05:58:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.445 05:58:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.445 ************************************ 00:06:08.445 START TEST cpu_locks 00:06:08.445 ************************************ 00:06:08.445 05:58:33 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:08.445 * Looking for test storage... 
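[editor's note] The killprocess calls traced above (autotest_common.sh@950-974) follow one fixed shape: confirm the process is alive with kill -0, resolve its command name with ps so an escalated process is never killed by mistake, then kill and wait. A condensed sketch of that shape, with the signal left implicit because the trace never names one:

    #!/usr/bin/env bash
    # Sketch of the killprocess pattern traced above (autotest_common.sh@950-974).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1           # liveness probe, sends no signal
        if [ "$(uname)" = Linux ]; then
            # Resolve the command name; refuse to touch a sudo process.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"     # default SIGTERM; the trace shows a bare kill
        wait "$pid"     # reap the child so the next test starts clean
    }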
00:06:08.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:08.445 05:58:33 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.445 05:58:33 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.445 05:58:33 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.445 05:58:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.445 --rc genhtml_branch_coverage=1 00:06:08.445 --rc genhtml_function_coverage=1 00:06:08.445 --rc genhtml_legend=1 00:06:08.445 --rc geninfo_all_blocks=1 00:06:08.445 --rc geninfo_unexecuted_blocks=1 00:06:08.445 00:06:08.445 ' 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.445 --rc genhtml_branch_coverage=1 00:06:08.445 --rc genhtml_function_coverage=1 
00:06:08.445 --rc genhtml_legend=1 00:06:08.445 --rc geninfo_all_blocks=1 00:06:08.445 --rc geninfo_unexecuted_blocks=1 00:06:08.445 00:06:08.445 ' 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.445 --rc genhtml_branch_coverage=1 00:06:08.445 --rc genhtml_function_coverage=1 00:06:08.445 --rc genhtml_legend=1 00:06:08.445 --rc geninfo_all_blocks=1 00:06:08.445 --rc geninfo_unexecuted_blocks=1 00:06:08.445 00:06:08.445 ' 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.445 --rc genhtml_branch_coverage=1 00:06:08.445 --rc genhtml_function_coverage=1 00:06:08.445 --rc genhtml_legend=1 00:06:08.445 --rc geninfo_all_blocks=1 00:06:08.445 --rc geninfo_unexecuted_blocks=1 00:06:08.445 00:06:08.445 ' 00:06:08.445 05:58:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:08.445 05:58:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:08.445 05:58:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:08.445 05:58:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.445 05:58:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.704 ************************************ 00:06:08.704 START TEST default_locks 00:06:08.704 ************************************ 00:06:08.704 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:08.704 05:58:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71063 00:06:08.704 05:58:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71063 00:06:08.704 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71063 ']' 00:06:08.704 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.704 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.705 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.705 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.705 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.705 05:58:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.705 [2024-10-01 05:58:34.128184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
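[editor's note] default_locks above launches spdk_tgt -m 0x1 and blocks in waitforlisten until the RPC socket answers. Only the argument handling of waitforlisten is expanded in this excerpt, so the polling body below is a plausible sketch, not the traced implementation: it assumes the helper loops on an rpc.py call (rpc_get_methods is a standard SPDK RPC) while checking that the target is still alive.

    #!/usr/bin/env bash
    # Plausible waitforlisten sketch; the loop body is an assumption.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1   # target died while starting up
            # A successful RPC round trip implies the socket is listening.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                   rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }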
00:06:08.705 [2024-10-01 05:58:34.128312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71063 ] 00:06:08.705 [2024-10-01 05:58:34.262670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.705 [2024-10-01 05:58:34.292432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.639 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.639 05:58:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:09.639 05:58:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71063 00:06:09.639 05:58:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71063 00:06:09.639 05:58:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71063 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71063 ']' 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71063 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71063 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.639 killing process with pid 71063 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71063' 00:06:09.639 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71063 00:06:09.640 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71063 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71063 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71063 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71063 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71063 ']' 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.898 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.898 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71063) - No such process 00:06:09.898 ERROR: process (pid: 71063) is no longer running 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.898 00:06:09.898 real 0m1.410s 00:06:09.898 user 0m1.470s 00:06:09.898 sys 0m0.413s 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.898 05:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.898 ************************************ 00:06:09.898 END TEST default_locks 00:06:09.898 ************************************ 00:06:09.898 05:58:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.898 05:58:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.898 05:58:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.898 05:58:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.898 ************************************ 00:06:09.898 START TEST default_locks_via_rpc 00:06:09.898 ************************************ 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71110 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71110 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71110 ']' 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
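[editor's note] The default_locks teardown above deliberately re-runs waitforlisten against the already-killed PID through the NOT wrapper, which succeeds only when the wrapped command fails (hence "No such process" followed by a clean return). Condensed from the es= bookkeeping in the autotest_common.sh@650-677 trace:

    #!/usr/bin/env bash
    # Condensed NOT wrapper: succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        # The full helper special-cases es > 128 (death by signal); that
        # branch is not taken in the trace above and is omitted here.
        ((es != 0))   # invert: a non-zero status from "$@" is our success
    }

    NOT waitforlisten 71063 && echo "pid 71063 is gone, as expected"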
00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.898 05:58:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.155 [2024-10-01 05:58:35.583765] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:10.156 [2024-10-01 05:58:35.583903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71110 ] 00:06:10.156 [2024-10-01 05:58:35.722669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.156 [2024-10-01 05:58:35.765806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.858 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.859 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.859 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.859 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.859 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.859 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71110 00:06:10.859 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71110 00:06:10.859 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71110 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71110 ']' 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71110 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71110 00:06:11.120 killing process with pid 71110 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71110' 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71110 00:06:11.120 05:58:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71110 00:06:11.692 00:06:11.692 real 0m1.534s 00:06:11.692 user 0m1.529s 00:06:11.692 sys 0m0.461s 00:06:11.692 05:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.692 ************************************ 00:06:11.692 END TEST default_locks_via_rpc 00:06:11.692 ************************************ 00:06:11.692 05:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.692 05:58:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.692 05:58:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.692 05:58:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.692 05:58:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.692 ************************************ 00:06:11.692 START TEST non_locking_app_on_locked_coremask 00:06:11.692 ************************************ 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71157 00:06:11.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71157 /var/tmp/spdk.sock 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71157 ']' 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.692 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.692 [2024-10-01 05:58:37.149933] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
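[editor's note] default_locks_via_rpc, which finished just above, flips the core lock off and back on over the RPC socket instead of at launch time. Both RPC names appear verbatim in the trace; the standalone replay below assumes a target already listening on the default socket, and the tgt_pid value is illustrative (the traced run used pid 71110).

    #!/usr/bin/env bash
    # Runtime toggle of CPU core-mask locks, as exercised by the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    tgt_pid=71110   # substitute the pid of your running spdk_tgt

    "$RPC" -s "$SOCK" framework_disable_cpumask_locks   # drops the lock file
    # ...with locks off, no spdk_cpu_lock file should be held (no_locks)...
    "$RPC" -s "$SOCK" framework_enable_cpumask_locks    # re-claims the core
    # Verify the lock is held again, exactly as locks_exist does:
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"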
00:06:11.692 [2024-10-01 05:58:37.150324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71157 ] 00:06:11.692 [2024-10-01 05:58:37.282732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.953 [2024-10-01 05:58:37.325452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71173 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71173 /var/tmp/spdk2.sock 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71173 ']' 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.526 05:58:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.526 [2024-10-01 05:58:38.049717] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:12.526 [2024-10-01 05:58:38.050040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71173 ] 00:06:12.786 [2024-10-01 05:58:38.191421] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.786 [2024-10-01 05:58:38.191482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.786 [2024-10-01 05:58:38.273323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.352 05:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.352 05:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.352 05:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71157 00:06:13.352 05:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71157 00:06:13.352 05:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.608 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71157 00:06:13.608 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71157 ']' 00:06:13.608 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71157 00:06:13.608 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.608 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.608 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71157 00:06:13.866 killing process with pid 71157 00:06:13.866 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.866 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.866 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71157' 00:06:13.866 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71157 00:06:13.866 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71157 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71173 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71173 ']' 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71173 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71173 00:06:14.432 killing process with pid 71173 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71173' 00:06:14.432 05:58:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71173 00:06:14.432 05:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71173 00:06:14.690 ************************************ 00:06:14.690 END TEST non_locking_app_on_locked_coremask 00:06:14.690 ************************************ 00:06:14.690 00:06:14.690 real 0m3.117s 00:06:14.690 user 0m3.329s 00:06:14.690 sys 0m0.868s 00:06:14.690 05:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.690 05:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.690 05:58:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.690 05:58:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.690 05:58:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.690 05:58:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.690 ************************************ 00:06:14.690 START TEST locking_app_on_unlocked_coremask 00:06:14.690 ************************************ 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71231 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71231 /var/tmp/spdk.sock 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71231 ']' 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.690 05:58:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.947 [2024-10-01 05:58:40.310020] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:14.947 [2024-10-01 05:58:40.310269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71231 ] 00:06:14.947 [2024-10-01 05:58:40.439686] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
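[editor's note] non_locking_app_on_locked_coremask above, and the locking_app_on_unlocked_coremask run now starting, both put two spdk_tgt instances on core 0, which only works when one side opts out of the core lock. The launch shape below is lifted from the traced command lines (same flags and sockets); waitforlisten is the helper sketched earlier.

    #!/usr/bin/env bash
    # Two targets on one core: the second opts out of lock acquisition.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$SPDK_TGT" -m 0x1 &                      # first instance claims core 0's lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                   # second instance skips the lock
    waitforlisten "$pid2" /var/tmp/spdk2.sock
    # Without --disable-cpumask-locks the second instance would refuse to start.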
00:06:14.947 [2024-10-01 05:58:40.439727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.947 [2024-10-01 05:58:40.480667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71247 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71247 /var/tmp/spdk2.sock 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71247 ']' 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.881 05:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.881 [2024-10-01 05:58:41.216882] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:15.881 [2024-10-01 05:58:41.217171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71247 ] 00:06:15.881 [2024-10-01 05:58:41.353369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.881 [2024-10-01 05:58:41.434111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.447 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.447 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.447 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71247 00:06:16.447 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71247 00:06:16.447 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71231 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71231 ']' 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71231 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71231 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.011 killing process with pid 71231 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71231' 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71231 00:06:17.011 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71231 00:06:17.577 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71247 00:06:17.577 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71247 ']' 00:06:17.577 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71247 00:06:17.577 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.577 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.577 05:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71247 00:06:17.577 killing process with pid 71247 00:06:17.577 05:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.577 05:58:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.577 05:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71247' 00:06:17.577 05:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71247 00:06:17.577 05:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71247 00:06:17.835 00:06:17.835 real 0m3.095s 00:06:17.835 user 0m3.310s 00:06:17.835 sys 0m0.844s 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.835 ************************************ 00:06:17.835 END TEST locking_app_on_unlocked_coremask 00:06:17.835 ************************************ 00:06:17.835 05:58:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:17.835 05:58:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.835 05:58:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.835 05:58:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.835 ************************************ 00:06:17.835 START TEST locking_app_on_locked_coremask 00:06:17.835 ************************************ 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:17.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71305 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71305 /var/tmp/spdk.sock 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71305 ']' 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.835 05:58:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.093 [2024-10-01 05:58:43.472279] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
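[editor's note] Every test in this file closes its happy path the same way, by asking the kernel which files the target holds locks on; the repeated event/cpu_locks.sh@22 lines show that locks_exist is just lslocks piped into grep:

    #!/usr/bin/env bash
    # locks_exist, as repeated throughout the trace (event/cpu_locks.sh@22):
    # list the pid's file locks via util-linux, look for spdk_cpu_lock_*.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 71305 && echo "core lock held by pid 71305"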
00:06:18.093 [2024-10-01 05:58:43.472546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71305 ] 00:06:18.093 [2024-10-01 05:58:43.606645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.093 [2024-10-01 05:58:43.648964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71321 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71321 /var/tmp/spdk2.sock 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71321 /var/tmp/spdk2.sock 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71321 /var/tmp/spdk2.sock 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71321 ']' 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.057 05:58:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.057 [2024-10-01 05:58:44.386549] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:19.057 [2024-10-01 05:58:44.386666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71321 ] 00:06:19.057 [2024-10-01 05:58:44.528610] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71305 has claimed it. 00:06:19.057 [2024-10-01 05:58:44.528701] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.634 ERROR: process (pid: 71321) is no longer running 00:06:19.634 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71321) - No such process 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71305 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71305 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71305 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71305 ']' 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71305 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71305 00:06:19.634 killing process with pid 71305 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71305' 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71305 00:06:19.634 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71305 00:06:20.201 ************************************ 00:06:20.201 END TEST locking_app_on_locked_coremask 00:06:20.201 ************************************ 00:06:20.201 00:06:20.201 real 0m2.163s 00:06:20.201 user 0m2.365s 00:06:20.201 sys 0m0.552s 00:06:20.201 05:58:45 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.201 05:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.201 05:58:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:20.201 05:58:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.201 05:58:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.201 05:58:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.201 ************************************ 00:06:20.201 START TEST locking_overlapped_coremask 00:06:20.201 ************************************ 00:06:20.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71363 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71363 /var/tmp/spdk.sock 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71363 ']' 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.201 05:58:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:20.201 [2024-10-01 05:58:45.706278] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:20.201 [2024-10-01 05:58:45.706547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71363 ] 00:06:20.461 [2024-10-01 05:58:45.842599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.461 [2024-10-01 05:58:45.889539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.461 [2024-10-01 05:58:45.889890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.461 [2024-10-01 05:58:45.889930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71381 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71381 /var/tmp/spdk2.sock 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71381 /var/tmp/spdk2.sock 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71381 /var/tmp/spdk2.sock 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71381 ']' 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.031 05:58:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.291 [2024-10-01 05:58:46.684589] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:21.291 [2024-10-01 05:58:46.684724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:06:21.291 [2024-10-01 05:58:46.830173] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71363 has claimed it. 00:06:21.291 [2024-10-01 05:58:46.830232] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.862 ERROR: process (pid: 71381) is no longer running 00:06:21.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71381) - No such process 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71363 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71363 ']' 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71363 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71363 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71363' 00:06:21.862 killing process with pid 71363 00:06:21.862 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71363 00:06:21.862 05:58:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71363 00:06:22.119 00:06:22.119 real 0m1.957s 00:06:22.119 user 0m5.333s 00:06:22.119 sys 0m0.447s 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.119 ************************************ 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.119 END TEST locking_overlapped_coremask 00:06:22.119 ************************************ 00:06:22.119 05:58:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.119 05:58:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.119 05:58:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.119 05:58:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.119 ************************************ 00:06:22.119 START TEST locking_overlapped_coremask_via_rpc 00:06:22.119 ************************************ 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71423 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71423 /var/tmp/spdk.sock 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71423 ']' 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.119 05:58:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.119 [2024-10-01 05:58:47.717436] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:22.119 [2024-10-01 05:58:47.717687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71423 ] 00:06:22.376 [2024-10-01 05:58:47.853128] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.376 [2024-10-01 05:58:47.853180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.376 [2024-10-01 05:58:47.889496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.376 [2024-10-01 05:58:47.889720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.376 [2024-10-01 05:58:47.889803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71441 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71441 /var/tmp/spdk2.sock 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71441 ']' 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.943 05:58:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.203 [2024-10-01 05:58:48.645281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:23.203 [2024-10-01 05:58:48.645670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71441 ] 00:06:23.203 [2024-10-01 05:58:48.797272] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.203 [2024-10-01 05:58:48.797318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.466 [2024-10-01 05:58:48.862917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.466 [2024-10-01 05:58:48.865923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.466 [2024-10-01 05:58:48.865992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.038 [2024-10-01 05:58:49.525979] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71423 has claimed it. 00:06:24.038 request: 00:06:24.038 { 00:06:24.038 "method": "framework_enable_cpumask_locks", 00:06:24.038 "req_id": 1 00:06:24.038 } 00:06:24.038 Got JSON-RPC error response 00:06:24.038 response: 00:06:24.038 { 00:06:24.038 "code": -32603, 00:06:24.038 "message": "Failed to claim CPU core: 2" 00:06:24.038 } 00:06:24.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
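The failed claim just above is the crux of the via-RPC variant: both targets boot with --disable-cpumask-locks, the first (pid 71423, mask 0x7, cores 0-2) takes its locks on demand through framework_enable_cpumask_locks, and the second (mask 0x1c, cores 2-4) then receives JSON-RPC error -32603 because core 2 is already held. A minimal shell sketch of reproducing that by hand with the in-tree rpc.py; the readiness waits are elided, and treating the nonzero exit as the expected outcome is an assumption of the sketch:
# Two targets whose masks overlap on core 2; neither claims locks at boot.
build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
# ... wait for both RPC sockets to appear (elided) ...
# First target claims cores 0-2; this succeeds.
scripts/rpc.py framework_enable_cpumask_locks
# Second target cannot claim core 2; the call fails with -32603.
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo 'claim refused, as expected'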
00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71423 /var/tmp/spdk.sock 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71423 ']' 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.038 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71441 /var/tmp/spdk2.sock 00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71441 ']' 00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
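Each claimed core is backed by a file /var/tmp/spdk_cpu_lock_NNN that the target holds a lock on, and that is what both verification paths in this test key off: check_remaining_locks compares the glob of existing lock files against a brace expansion for the expected cores, while the lslocks check attributes the held locks to the owning pid. A condensed sketch of those two checks, assuming a target that claimed cores 0-2 is running as $pid:
# File-level check: exactly the lock files for cores 000..002 exist.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'lock files match the mask'
# Process-level check: lslocks attributes the held locks to the pid.
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds its cpu locks"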
00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.298 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.558 ************************************ 00:06:24.558 END TEST locking_overlapped_coremask_via_rpc 00:06:24.558 ************************************ 00:06:24.558 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.558 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.558 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.558 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.558 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.558 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.558 00:06:24.558 real 0m2.311s 00:06:24.558 user 0m1.106s 00:06:24.558 sys 0m0.132s 00:06:24.558 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.558 05:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.558 05:58:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.558 05:58:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71423 ]] 00:06:24.558 05:58:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71423 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71423 ']' 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71423 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71423 00:06:24.558 killing process with pid 71423 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71423' 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71423 00:06:24.558 05:58:49 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71423 00:06:24.818 05:58:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71441 ]] 00:06:24.818 05:58:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71441 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71441 ']' 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71441 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.818 
05:58:50 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71441 00:06:24.818 killing process with pid 71441 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71441' 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71441 00:06:24.818 05:58:50 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71441 00:06:25.079 05:58:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.079 Process with pid 71423 is not found 00:06:25.079 05:58:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:25.079 05:58:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71423 ]] 00:06:25.079 05:58:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71423 00:06:25.079 05:58:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71423 ']' 00:06:25.079 05:58:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71423 00:06:25.079 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71423) - No such process 00:06:25.079 05:58:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71423 is not found' 00:06:25.079 05:58:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71441 ]] 00:06:25.079 Process with pid 71441 is not found 00:06:25.079 05:58:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71441 00:06:25.079 05:58:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71441 ']' 00:06:25.079 05:58:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71441 00:06:25.079 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71441) - No such process 00:06:25.079 05:58:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71441 is not found' 00:06:25.079 05:58:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.079 00:06:25.079 real 0m16.706s 00:06:25.079 user 0m29.061s 00:06:25.079 sys 0m4.447s 00:06:25.079 05:58:50 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.079 05:58:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.079 ************************************ 00:06:25.079 END TEST cpu_locks 00:06:25.079 ************************************ 00:06:25.079 ************************************ 00:06:25.079 END TEST event 00:06:25.079 ************************************ 00:06:25.079 00:06:25.079 real 0m42.467s 00:06:25.079 user 1m22.248s 00:06:25.079 sys 0m7.172s 00:06:25.079 05:58:50 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.079 05:58:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.340 05:58:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:25.340 05:58:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.340 05:58:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.340 05:58:50 -- common/autotest_common.sh@10 -- # set +x 00:06:25.340 ************************************ 00:06:25.340 START TEST thread 00:06:25.340 ************************************ 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:25.340 * Looking for test storage... 
00:06:25.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:25.340 05:58:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.340 05:58:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.340 05:58:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.340 05:58:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.340 05:58:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.340 05:58:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.340 05:58:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.340 05:58:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.340 05:58:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.340 05:58:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.340 05:58:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.340 05:58:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:25.340 05:58:50 thread -- scripts/common.sh@345 -- # : 1 00:06:25.340 05:58:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.340 05:58:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.340 05:58:50 thread -- scripts/common.sh@365 -- # decimal 1 00:06:25.340 05:58:50 thread -- scripts/common.sh@353 -- # local d=1 00:06:25.340 05:58:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.340 05:58:50 thread -- scripts/common.sh@355 -- # echo 1 00:06:25.340 05:58:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.340 05:58:50 thread -- scripts/common.sh@366 -- # decimal 2 00:06:25.340 05:58:50 thread -- scripts/common.sh@353 -- # local d=2 00:06:25.340 05:58:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.340 05:58:50 thread -- scripts/common.sh@355 -- # echo 2 00:06:25.340 05:58:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.340 05:58:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.340 05:58:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.340 05:58:50 thread -- scripts/common.sh@368 -- # return 0 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.340 --rc genhtml_branch_coverage=1 00:06:25.340 --rc genhtml_function_coverage=1 00:06:25.340 --rc genhtml_legend=1 00:06:25.340 --rc geninfo_all_blocks=1 00:06:25.340 --rc geninfo_unexecuted_blocks=1 00:06:25.340 00:06:25.340 ' 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.340 --rc genhtml_branch_coverage=1 00:06:25.340 --rc genhtml_function_coverage=1 00:06:25.340 --rc genhtml_legend=1 00:06:25.340 --rc geninfo_all_blocks=1 00:06:25.340 --rc geninfo_unexecuted_blocks=1 00:06:25.340 00:06:25.340 ' 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:25.340 --rc genhtml_branch_coverage=1 00:06:25.340 --rc genhtml_function_coverage=1 00:06:25.340 --rc genhtml_legend=1 00:06:25.340 --rc geninfo_all_blocks=1 00:06:25.340 --rc geninfo_unexecuted_blocks=1 00:06:25.340 00:06:25.340 ' 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:25.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.340 --rc genhtml_branch_coverage=1 00:06:25.340 --rc genhtml_function_coverage=1 00:06:25.340 --rc genhtml_legend=1 00:06:25.340 --rc geninfo_all_blocks=1 00:06:25.340 --rc geninfo_unexecuted_blocks=1 00:06:25.340 00:06:25.340 ' 00:06:25.340 05:58:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.340 05:58:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.340 ************************************ 00:06:25.340 START TEST thread_poller_perf 00:06:25.340 ************************************ 00:06:25.340 05:58:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.340 [2024-10-01 05:58:50.953440] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:25.340 [2024-10-01 05:58:50.953752] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71568 ] 00:06:25.599 [2024-10-01 05:58:51.089922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.599 [2024-10-01 05:58:51.147791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.599 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:26.980 ====================================== 00:06:26.980 busy:2614145952 (cyc) 00:06:26.980 total_run_count: 304000 00:06:26.980 tsc_hz: 2600000000 (cyc) 00:06:26.980 ====================================== 00:06:26.980 poller_cost: 8599 (cyc), 3307 (nsec) 00:06:26.980 00:06:26.980 ************************************ 00:06:26.980 END TEST thread_poller_perf 00:06:26.980 ************************************ 00:06:26.980 real 0m1.289s 00:06:26.980 user 0m1.107s 00:06:26.980 sys 0m0.075s 00:06:26.980 05:58:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.980 05:58:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.980 05:58:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.980 05:58:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:26.980 05:58:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.980 05:58:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.980 ************************************ 00:06:26.980 START TEST thread_poller_perf 00:06:26.980 ************************************ 00:06:26.980 05:58:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.980 [2024-10-01 05:58:52.283534] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:26.980 [2024-10-01 05:58:52.283798] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71610 ] 00:06:26.980 [2024-10-01 05:58:52.419323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.980 Running 1000 pollers for 1 seconds with 0 microseconds period. 
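The figures in the block above determine poller_cost directly: 2614145952 busy cycles over 304000 completed polls is 8599 cycles per poll (integer-truncated), and at the reported tsc_hz of 2600000000 cycles per second that is 8599 / 2.6 ≈ 3307 ns, matching the printed poller_cost line. The same arithmetic in shell, assuming the three values have been scraped from the output:
busy=2614145952 total_run_count=304000 tsc_hz=2600000000
cost_cyc=$(( busy / total_run_count ))   # 8599
cost_nsec=$(awk -v c="$cost_cyc" -v hz="$tsc_hz" \
    'BEGIN { printf "%d", c * 1e9 / hz }')   # 3307
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"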
00:06:26.980 [2024-10-01 05:58:52.452548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.936 ====================================== 00:06:27.936 busy:2603607518 (cyc) 00:06:27.936 total_run_count: 3961000 00:06:27.936 tsc_hz: 2600000000 (cyc) 00:06:27.936 ====================================== 00:06:27.936 poller_cost: 657 (cyc), 252 (nsec) 00:06:27.936 00:06:27.936 real 0m1.285s 00:06:27.936 user 0m1.121s 00:06:27.936 sys 0m0.057s 00:06:27.936 ************************************ 00:06:27.936 END TEST thread_poller_perf 00:06:27.936 ************************************ 00:06:27.936 05:58:53 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.936 05:58:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.196 05:58:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:28.196 ************************************ 00:06:28.196 END TEST thread 00:06:28.196 ************************************ 00:06:28.196 00:06:28.196 real 0m2.850s 00:06:28.196 user 0m2.349s 00:06:28.196 sys 0m0.264s 00:06:28.196 05:58:53 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.196 05:58:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.196 05:58:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:28.196 05:58:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:28.196 05:58:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.196 05:58:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.196 05:58:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.196 ************************************ 00:06:28.196 START TEST app_cmdline 00:06:28.196 ************************************ 00:06:28.196 05:58:53 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:28.196 * Looking for test storage... 00:06:28.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:28.196 05:58:53 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.196 05:58:53 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.196 05:58:53 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.196 05:58:53 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:28.197 05:58:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.456 05:58:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:28.456 05:58:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:28.456 05:58:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.456 05:58:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:28.456 05:58:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.456 05:58:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.456 05:58:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.456 05:58:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.456 --rc genhtml_branch_coverage=1 00:06:28.456 --rc genhtml_function_coverage=1 00:06:28.456 --rc genhtml_legend=1 00:06:28.456 --rc geninfo_all_blocks=1 00:06:28.456 --rc geninfo_unexecuted_blocks=1 00:06:28.456 00:06:28.456 ' 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.456 --rc genhtml_branch_coverage=1 00:06:28.456 --rc genhtml_function_coverage=1 00:06:28.456 --rc genhtml_legend=1 00:06:28.456 --rc geninfo_all_blocks=1 00:06:28.456 --rc geninfo_unexecuted_blocks=1 00:06:28.456 00:06:28.456 ' 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.456 --rc genhtml_branch_coverage=1 00:06:28.456 --rc genhtml_function_coverage=1 00:06:28.456 --rc genhtml_legend=1 00:06:28.456 --rc geninfo_all_blocks=1 00:06:28.456 --rc geninfo_unexecuted_blocks=1 00:06:28.456 00:06:28.456 ' 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.456 --rc genhtml_branch_coverage=1 00:06:28.456 --rc genhtml_function_coverage=1 00:06:28.456 --rc genhtml_legend=1 00:06:28.456 --rc geninfo_all_blocks=1 00:06:28.456 --rc geninfo_unexecuted_blocks=1 00:06:28.456 00:06:28.456 ' 00:06:28.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.456 05:58:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:28.456 05:58:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71688 00:06:28.456 05:58:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71688 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71688 ']' 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.456 05:58:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.456 05:58:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:28.456 [2024-10-01 05:58:53.898965] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:28.456 [2024-10-01 05:58:53.899111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71688 ] 00:06:28.456 [2024-10-01 05:58:54.032090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.715 [2024-10-01 05:58:54.081638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.282 05:58:54 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.282 05:58:54 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:29.282 05:58:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:29.540 { 00:06:29.540 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:29.540 "fields": { 00:06:29.540 "major": 25, 00:06:29.540 "minor": 1, 00:06:29.540 "patch": 0, 00:06:29.540 "suffix": "-pre", 00:06:29.540 "commit": "09cc66129" 00:06:29.540 } 00:06:29.540 } 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:29.540 05:58:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 
00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:29.540 05:58:54 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.799 request: 00:06:29.799 { 00:06:29.799 "method": "env_dpdk_get_mem_stats", 00:06:29.799 "req_id": 1 00:06:29.799 } 00:06:29.799 Got JSON-RPC error response 00:06:29.799 response: 00:06:29.799 { 00:06:29.799 "code": -32601, 00:06:29.799 "message": "Method not found" 00:06:29.799 } 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.799 05:58:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71688 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71688 ']' 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71688 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71688 00:06:29.799 killing process with pid 71688 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71688' 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@969 -- # kill 71688 00:06:29.799 05:58:55 app_cmdline -- common/autotest_common.sh@974 -- # wait 71688 00:06:30.057 00:06:30.057 real 0m1.848s 00:06:30.057 user 0m2.209s 00:06:30.057 sys 0m0.426s 00:06:30.057 ************************************ 00:06:30.057 END TEST app_cmdline 00:06:30.057 ************************************ 00:06:30.057 05:58:55 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.057 05:58:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.057 05:58:55 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:30.057 05:58:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.057 05:58:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.057 
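The -32601 response in the app_cmdline run above is the allowlist doing its job rather than a missing feature: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so spdk_get_version answers with the version object while env_dpdk_get_mem_stats, a method the target normally serves, is reported as 'Method not found'. A minimal sketch of the same probe, assuming a target started that way on the default socket with the readiness wait elided:
build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# ... wait for /var/tmp/spdk.sock (elided) ...
scripts/rpc.py spdk_get_version         # allowed: prints the version JSON
scripts/rpc.py env_dpdk_get_mem_stats   # filtered out: -32601 Method not found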
05:58:55 -- common/autotest_common.sh@10 -- # set +x 00:06:30.057 ************************************ 00:06:30.057 START TEST version 00:06:30.057 ************************************ 00:06:30.057 05:58:55 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:30.057 * Looking for test storage... 00:06:30.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:30.057 05:58:55 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.057 05:58:55 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.057 05:58:55 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:30.057 05:58:55 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:30.057 05:58:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.057 05:58:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.057 05:58:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.057 05:58:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.057 05:58:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.057 05:58:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.057 05:58:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.057 05:58:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.057 05:58:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.057 05:58:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.057 05:58:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.057 05:58:55 version -- scripts/common.sh@344 -- # case "$op" in 00:06:30.057 05:58:55 version -- scripts/common.sh@345 -- # : 1 00:06:30.057 05:58:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.057 05:58:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.057 05:58:55 version -- scripts/common.sh@365 -- # decimal 1 00:06:30.316 05:58:55 version -- scripts/common.sh@353 -- # local d=1 00:06:30.316 05:58:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.316 05:58:55 version -- scripts/common.sh@355 -- # echo 1 00:06:30.316 05:58:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.316 05:58:55 version -- scripts/common.sh@366 -- # decimal 2 00:06:30.316 05:58:55 version -- scripts/common.sh@353 -- # local d=2 00:06:30.316 05:58:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.316 05:58:55 version -- scripts/common.sh@355 -- # echo 2 00:06:30.316 05:58:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.316 05:58:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.316 05:58:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.316 05:58:55 version -- scripts/common.sh@368 -- # return 0 00:06:30.316 05:58:55 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.316 05:58:55 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:30.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.316 --rc genhtml_branch_coverage=1 00:06:30.316 --rc genhtml_function_coverage=1 00:06:30.316 --rc genhtml_legend=1 00:06:30.316 --rc geninfo_all_blocks=1 00:06:30.316 --rc geninfo_unexecuted_blocks=1 00:06:30.316 00:06:30.316 ' 00:06:30.316 05:58:55 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:30.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.316 --rc genhtml_branch_coverage=1 00:06:30.316 --rc genhtml_function_coverage=1 00:06:30.316 --rc genhtml_legend=1 00:06:30.316 --rc geninfo_all_blocks=1 00:06:30.316 --rc geninfo_unexecuted_blocks=1 00:06:30.316 00:06:30.316 ' 00:06:30.316 05:58:55 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:30.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.316 --rc genhtml_branch_coverage=1 00:06:30.316 --rc genhtml_function_coverage=1 00:06:30.316 --rc genhtml_legend=1 00:06:30.316 --rc geninfo_all_blocks=1 00:06:30.316 --rc geninfo_unexecuted_blocks=1 00:06:30.316 00:06:30.316 ' 00:06:30.316 05:58:55 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:30.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.316 --rc genhtml_branch_coverage=1 00:06:30.316 --rc genhtml_function_coverage=1 00:06:30.316 --rc genhtml_legend=1 00:06:30.316 --rc geninfo_all_blocks=1 00:06:30.316 --rc geninfo_unexecuted_blocks=1 00:06:30.316 00:06:30.316 ' 00:06:30.316 05:58:55 version -- app/version.sh@17 -- # get_header_version major 00:06:30.316 05:58:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.316 05:58:55 version -- app/version.sh@14 -- # cut -f2 00:06:30.316 05:58:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.316 05:58:55 version -- app/version.sh@17 -- # major=25 00:06:30.316 05:58:55 version -- app/version.sh@18 -- # get_header_version minor 00:06:30.316 05:58:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.316 05:58:55 version -- app/version.sh@14 -- # cut -f2 00:06:30.316 05:58:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.316 05:58:55 version -- app/version.sh@18 -- # minor=1 
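get_header_version above is plain text extraction: grep the matching #define out of include/spdk/version.h, cut the second tab-separated field, and strip the quotes, after which version.sh assembles major.minor, appends .patch only when it is nonzero, and (as the lines that follow show) arrives at 25.1rc0 for the -pre suffix to compare against python's spdk.__version__. A condensed sketch of that extraction, assuming the tab-separated header layout the cut -f2 relies on:
get_header_version() {   # field name: MAJOR, MINOR, PATCH or SUFFIX
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
        | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)     # 25
minor=$(get_header_version MINOR)     # 1
patch=$(get_header_version PATCH)     # 0
version="${major}.${minor}"
(( patch != 0 )) && version="${version}.${patch}"
echo "$version"   # 25.1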
00:06:30.316 05:58:55 version -- app/version.sh@19 -- # get_header_version patch 00:06:30.316 05:58:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.316 05:58:55 version -- app/version.sh@14 -- # cut -f2 00:06:30.316 05:58:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.316 05:58:55 version -- app/version.sh@19 -- # patch=0 00:06:30.316 05:58:55 version -- app/version.sh@20 -- # get_header_version suffix 00:06:30.316 05:58:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.316 05:58:55 version -- app/version.sh@14 -- # cut -f2 00:06:30.316 05:58:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.316 05:58:55 version -- app/version.sh@20 -- # suffix=-pre 00:06:30.316 05:58:55 version -- app/version.sh@22 -- # version=25.1 00:06:30.316 05:58:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:30.316 05:58:55 version -- app/version.sh@28 -- # version=25.1rc0 00:06:30.316 05:58:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:30.317 05:58:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:30.317 05:58:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:30.317 05:58:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:30.317 ************************************ 00:06:30.317 END TEST version 00:06:30.317 ************************************ 00:06:30.317 00:06:30.317 real 0m0.181s 00:06:30.317 user 0m0.111s 00:06:30.317 sys 0m0.093s 00:06:30.317 05:58:55 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.317 05:58:55 version -- common/autotest_common.sh@10 -- # set +x 00:06:30.317 05:58:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:30.317 05:58:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:30.317 05:58:55 -- spdk/autotest.sh@194 -- # uname -s 00:06:30.317 05:58:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:30.317 05:58:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:30.317 05:58:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:30.317 05:58:55 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:30.317 05:58:55 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:30.317 05:58:55 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:30.317 05:58:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.317 05:58:55 -- common/autotest_common.sh@10 -- # set +x 00:06:30.317 ************************************ 00:06:30.317 START TEST blockdev_nvme 00:06:30.317 ************************************ 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:30.317 * Looking for test storage... 
00:06:30.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.317 05:58:55 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:30.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.317 --rc genhtml_branch_coverage=1 00:06:30.317 --rc genhtml_function_coverage=1 00:06:30.317 --rc genhtml_legend=1 00:06:30.317 --rc geninfo_all_blocks=1 00:06:30.317 --rc geninfo_unexecuted_blocks=1 00:06:30.317 00:06:30.317 ' 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:30.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.317 --rc genhtml_branch_coverage=1 00:06:30.317 --rc genhtml_function_coverage=1 00:06:30.317 --rc genhtml_legend=1 00:06:30.317 --rc 
geninfo_all_blocks=1 00:06:30.317 --rc geninfo_unexecuted_blocks=1 00:06:30.317 00:06:30.317 ' 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:30.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.317 --rc genhtml_branch_coverage=1 00:06:30.317 --rc genhtml_function_coverage=1 00:06:30.317 --rc genhtml_legend=1 00:06:30.317 --rc geninfo_all_blocks=1 00:06:30.317 --rc geninfo_unexecuted_blocks=1 00:06:30.317 00:06:30.317 ' 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:30.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.317 --rc genhtml_branch_coverage=1 00:06:30.317 --rc genhtml_function_coverage=1 00:06:30.317 --rc genhtml_legend=1 00:06:30.317 --rc geninfo_all_blocks=1 00:06:30.317 --rc geninfo_unexecuted_blocks=1 00:06:30.317 00:06:30.317 ' 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:30.317 05:58:55 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71849 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71849 00:06:30.317 05:58:55 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 71849 ']' 00:06:30.317 05:58:55 blockdev_nvme 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.317 05:58:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.576 [2024-10-01 05:58:55.993380] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:30.576 [2024-10-01 05:58:55.993686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71849 ] 00:06:30.576 [2024-10-01 05:58:56.125717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.576 [2024-10-01 05:58:56.159991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.510 05:58:56 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.510 05:58:56 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:31.510 05:58:56 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:31.510 05:58:56 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:31.510 05:58:56 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:31.510 05:58:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:31.510 05:58:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:31.510 05:58:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:31.510 05:58:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.510 05:58:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.769 05:58:57 blockdev_nvme -- 
bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.769 05:58:57 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:31.769 05:58:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:31.770 05:58:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4e973a2a-e17d-4943-a6a7-0ef9b1b329e6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4e973a2a-e17d-4943-a6a7-0ef9b1b329e6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d4c44a30-7819-4e95-947d-8896c4f81e75"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d4c44a30-7819-4e95-947d-8896c4f81e75",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f7371d53-82f2-46ee-b5d7-bdc88dd7dc48"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f7371d53-82f2-46ee-b5d7-bdc88dd7dc48",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4a51e656-4fb8-4e73-a492-6922c52a1b84"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a51e656-4fb8-4e73-a492-6922c52a1b84",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' 
"traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ef90ee92-a818-4a88-9e60-036dccff3db5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ef90ee92-a818-4a88-9e60-036dccff3db5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f4478eec-91d1-4fe4-a1ef-2917c5378fbe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f4478eec-91d1-4fe4-a1ef-2917c5378fbe",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' 
"mp_policy": "active_passive"' ' }' '}' 00:06:31.770 05:58:57 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:31.770 05:58:57 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:31.770 05:58:57 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:31.770 05:58:57 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71849 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 71849 ']' 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 71849 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71849 00:06:31.770 killing process with pid 71849 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71849' 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 71849 00:06:31.770 05:58:57 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 71849 00:06:32.028 05:58:57 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:32.028 05:58:57 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:32.028 05:58:57 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:32.028 05:58:57 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.028 05:58:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.028 ************************************ 00:06:32.028 START TEST bdev_hello_world 00:06:32.028 ************************************ 00:06:32.028 05:58:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:32.287 [2024-10-01 05:58:57.644903] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:32.287 [2024-10-01 05:58:57.645107] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71922 ] 00:06:32.287 [2024-10-01 05:58:57.779803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.287 [2024-10-01 05:58:57.814133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.853 [2024-10-01 05:58:58.183573] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:32.853 [2024-10-01 05:58:58.183740] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:32.853 [2024-10-01 05:58:58.183774] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:32.853 [2024-10-01 05:58:58.185857] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:32.853 [2024-10-01 05:58:58.186726] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:32.853 [2024-10-01 05:58:58.186880] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:32.853 [2024-10-01 05:58:58.187144] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:32.853 00:06:32.853 [2024-10-01 05:58:58.187177] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:32.853 ************************************ 00:06:32.853 END TEST bdev_hello_world 00:06:32.853 ************************************ 00:06:32.853 00:06:32.853 real 0m0.740s 00:06:32.853 user 0m0.497s 00:06:32.853 sys 0m0.139s 00:06:32.853 05:58:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.853 05:58:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:32.853 05:58:58 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:32.853 05:58:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:32.853 05:58:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.853 05:58:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.853 ************************************ 00:06:32.853 START TEST bdev_bounds 00:06:32.853 ************************************ 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71953 00:06:32.853 Process bdevio pid: 71953 00:06:32.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
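Every daemon in this log (spdk_tgt earlier, the bdevio server just below, bdev_svc later) is gated behind the same `waitforlisten` readiness check before any RPC goes out. A sketch of that gate: `max_retries=100` matches the trace, while the probe RPC (rpc_get_methods) and the 0.1 s interval are assumptions.

```bash
# Sketch of the readiness gate used throughout this log.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # stop waiting if the target died
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering RPCs
        fi
        sleep 0.1
    done
    return 1
}
```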
00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71953' 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71953 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71953 ']' 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.853 05:58:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:32.853 [2024-10-01 05:58:58.431107] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:32.853 [2024-10-01 05:58:58.431337] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71953 ] 00:06:33.111 [2024-10-01 05:58:58.566303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.111 [2024-10-01 05:58:58.601584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.111 [2024-10-01 05:58:58.601876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.111 [2024-10-01 05:58:58.601950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.677 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.677 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:33.677 05:58:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:33.936 I/O targets: 00:06:33.936 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:33.936 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:33.936 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:33.936 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:33.936 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:33.936 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:33.936 00:06:33.936 00:06:33.936 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.936 http://cunit.sourceforge.net/ 00:06:33.936 00:06:33.936 00:06:33.936 Suite: bdevio tests on: Nvme3n1 00:06:33.936 Test: blockdev write read block ...passed 00:06:33.936 Test: blockdev write zeroes read block ...passed 00:06:33.936 Test: blockdev write zeroes read no split ...passed 00:06:33.936 Test: blockdev write zeroes read split ...passed 00:06:33.936 Test: blockdev write zeroes read split partial ...passed 00:06:33.936 Test: blockdev reset ...[2024-10-01 05:58:59.385648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:33.936 passed 00:06:33.936 
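The bdev_bounds run producing the CUnit suites here boils down to four steps: start bdevio as a long-lived RPC server (`-w`), wait for its socket, fire the suites through `tests.py perform_tests`, then tear it down. Condensed below; the trailing `kill` stands in for the harness's `trap 'cleanup; killprocess $bdevio_pid' ...` seen above.

```bash
# Condensed bdev_bounds flow (binary and config paths as in the trace).
test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
bdevio_pid=$!
waitforlisten "$bdevio_pid"               # same gate as sketched earlier
test/bdev/bdevio/tests.py perform_tests   # drives the per-bdev CUnit suites
kill "$bdevio_pid"
```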
Test: blockdev write read 8 blocks ...[2024-10-01 05:58:59.387291] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:33.936 passed 00:06:33.936 Test: blockdev write read size > 128k ...passed 00:06:33.936 Test: blockdev write read invalid size ...passed 00:06:33.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.936 Test: blockdev write read max offset ...passed 00:06:33.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.936 Test: blockdev writev readv 8 blocks ...passed 00:06:33.936 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.936 Test: blockdev writev readv block ...passed 00:06:33.936 Test: blockdev writev readv size > 128k ...passed 00:06:33.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.936 Test: blockdev comparev and writev ...[2024-10-01 05:58:59.393747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:33.936 Test: blockdev nvme passthru rw ...passed 00:06:33.936 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.936 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2c2a0a000 len:0x1000 00:06:33.936 [2024-10-01 05:58:59.393907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.936 [2024-10-01 05:58:59.394383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.936 [2024-10-01 05:58:59.394410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.936 passed 00:06:33.937 Test: blockdev copy ...passed 00:06:33.937 Suite: bdevio tests on: Nvme2n3 00:06:33.937 Test: blockdev write read block ...passed 00:06:33.937 Test: blockdev write zeroes read block ...passed 00:06:33.937 Test: blockdev write zeroes read no split ...passed 00:06:33.937 Test: blockdev write zeroes read split ...passed 00:06:33.937 Test: blockdev write zeroes read split partial ...passed 00:06:33.937 Test: blockdev reset ...[2024-10-01 05:58:59.406866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:33.937 [2024-10-01 05:58:59.408588] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.937 passed 00:06:33.937 Test: blockdev write read 8 blocks ...passed 00:06:33.937 Test: blockdev write read size > 128k ...passed 00:06:33.937 Test: blockdev write read invalid size ...passed 00:06:33.937 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.937 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.937 Test: blockdev write read max offset ...passed 00:06:33.937 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.937 Test: blockdev writev readv 8 blocks ...passed 00:06:33.937 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.937 Test: blockdev writev readv block ...passed 00:06:33.937 Test: blockdev writev readv size > 128k ...passed 00:06:33.937 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.937 Test: blockdev comparev and writev ...[2024-10-01 05:58:59.413617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:06:33.937 Test: blockdev nvme passthru rw ...passed 00:06:33.937 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.937 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2c2a03000 len:0x1000 00:06:33.937 [2024-10-01 05:58:59.413733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.937 [2024-10-01 05:58:59.414165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.937 [2024-10-01 05:58:59.414190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.937 passed 00:06:33.937 Test: blockdev copy ...passed 00:06:33.937 Suite: bdevio tests on: Nvme2n2 00:06:33.937 Test: blockdev write read block ...passed 00:06:33.937 Test: blockdev write zeroes read block ...passed 00:06:33.937 Test: blockdev write zeroes read no split ...passed 00:06:33.937 Test: blockdev write zeroes read split ...passed 00:06:33.937 Test: blockdev write zeroes read split partial ...passed 00:06:33.937 Test: blockdev reset ...[2024-10-01 05:58:59.428742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:33.937 [2024-10-01 05:58:59.431750] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.937 passed 00:06:33.937 Test: blockdev write read 8 blocks ...passed 00:06:33.937 Test: blockdev write read size > 128k ...passed 00:06:33.937 Test: blockdev write read invalid size ...passed 00:06:33.937 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.937 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.937 Test: blockdev write read max offset ...passed 00:06:33.937 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.937 Test: blockdev writev readv 8 blocks ...passed 00:06:33.937 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.937 Test: blockdev writev readv block ...passed 00:06:33.937 Test: blockdev writev readv size > 128k ...passed 00:06:33.937 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.937 Test: blockdev comparev and writev ...[2024-10-01 05:58:59.439586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2a03000 len:0x1000 00:06:33.937 [2024-10-01 05:58:59.439621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.937 passed 00:06:33.937 Test: blockdev nvme passthru rw ...passed 00:06:33.937 Test: blockdev nvme passthru vendor specific ...[2024-10-01 05:58:59.440589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.937 [2024-10-01 05:58:59.440615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.937 passed 00:06:33.937 Test: blockdev nvme admin passthru ...passed 00:06:33.937 Test: blockdev copy ...passed 00:06:33.937 Suite: bdevio tests on: Nvme2n1 00:06:33.937 Test: blockdev write read block ...passed 00:06:33.937 Test: blockdev write zeroes read block ...passed 00:06:33.937 Test: blockdev write zeroes read no split ...passed 00:06:33.937 Test: blockdev write zeroes read split ...passed 00:06:33.937 Test: blockdev write zeroes read split partial ...passed 00:06:33.937 Test: blockdev reset ...[2024-10-01 05:58:59.466435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:33.937 passed 00:06:33.937 Test: blockdev write read 8 blocks ...[2024-10-01 05:58:59.468993] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.937 passed 00:06:33.937 Test: blockdev write read size > 128k ...passed 00:06:33.937 Test: blockdev write read invalid size ...passed 00:06:33.937 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.937 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.937 Test: blockdev write read max offset ...passed 00:06:33.937 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.937 Test: blockdev writev readv 8 blocks ...passed 00:06:33.937 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.937 Test: blockdev writev readv block ...passed 00:06:33.937 Test: blockdev writev readv size > 128k ...passed 00:06:33.937 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.937 Test: blockdev comparev and writev ...[2024-10-01 05:58:59.473781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2a03000 len:0x1000 00:06:33.937 [2024-10-01 05:58:59.473819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.937 passed 00:06:33.937 Test: blockdev nvme passthru rw ...passed 00:06:33.937 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.937 Test: blockdev nvme admin passthru ...[2024-10-01 05:58:59.474300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.937 [2024-10-01 05:58:59.474326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.937 passed 00:06:33.937 Test: blockdev copy ...passed 00:06:33.937 Suite: bdevio tests on: Nvme1n1 00:06:33.937 Test: blockdev write read block ...passed 00:06:33.937 Test: blockdev write zeroes read block ...passed 00:06:33.937 Test: blockdev write zeroes read no split ...passed 00:06:33.937 Test: blockdev write zeroes read split ...passed 00:06:33.937 Test: blockdev write zeroes read split partial ...passed 00:06:33.937 Test: blockdev reset ...[2024-10-01 05:58:59.488262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:33.937 passed 00:06:33.937 Test: blockdev write read 8 blocks ...[2024-10-01 05:58:59.490523] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:33.937 passed 00:06:33.937 Test: blockdev write read size > 128k ...passed 00:06:33.937 Test: blockdev write read invalid size ...passed 00:06:33.937 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.937 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.937 Test: blockdev write read max offset ...passed 00:06:33.937 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.937 Test: blockdev writev readv 8 blocks ...passed 00:06:33.937 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.937 Test: blockdev writev readv block ...passed 00:06:33.937 Test: blockdev writev readv size > 128k ...passed 00:06:33.937 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.937 Test: blockdev comparev and writev ...[2024-10-01 05:58:59.504105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:33.937 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2c2e36000 len:0x1000 00:06:33.938 [2024-10-01 05:58:59.504214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:33.938 passed 00:06:33.938 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.938 Test: blockdev nvme admin passthru ...[2024-10-01 05:58:59.506187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:33.938 [2024-10-01 05:58:59.506219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:33.938 passed 00:06:33.938 Test: blockdev copy ...passed 00:06:33.938 Suite: bdevio tests on: Nvme0n1 00:06:33.938 Test: blockdev write read block ...passed 00:06:33.938 Test: blockdev write zeroes read block ...passed 00:06:33.938 Test: blockdev write zeroes read no split ...passed 00:06:33.938 Test: blockdev write zeroes read split ...passed 00:06:33.938 Test: blockdev write zeroes read split partial ...passed 00:06:33.938 Test: blockdev reset ...[2024-10-01 05:58:59.526498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:33.938 passed 00:06:33.938 Test: blockdev write read 8 blocks ...[2024-10-01 05:58:59.529005] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:33.938 passed 00:06:33.938 Test: blockdev write read size > 128k ...passed 00:06:33.938 Test: blockdev write read invalid size ...passed 00:06:33.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:33.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:33.938 Test: blockdev write read max offset ...passed 00:06:33.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:33.938 Test: blockdev writev readv 8 blocks ...passed 00:06:33.938 Test: blockdev writev readv 30 x 1block ...passed 00:06:33.938 Test: blockdev writev readv block ...passed 00:06:33.938 Test: blockdev writev readv size > 128k ...passed 00:06:33.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:33.938 Test: blockdev comparev and writev ...passed 00:06:33.938 Test: blockdev nvme passthru rw ...[2024-10-01 05:58:59.540611] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:33.938 separate metadata which is not supported yet. 
00:06:33.938 passed 00:06:33.938 Test: blockdev nvme passthru vendor specific ...passed 00:06:33.938 Test: blockdev nvme admin passthru ...[2024-10-01 05:58:59.542121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:33.938 [2024-10-01 05:58:59.542156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:33.938 passed 00:06:34.196 Test: blockdev copy ...passed 00:06:34.196 00:06:34.196 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.196 suites 6 6 n/a 0 0 00:06:34.196 tests 138 138 138 0 0 00:06:34.196 asserts 893 893 893 0 n/a 00:06:34.196 00:06:34.196 Elapsed time = 0.426 seconds 00:06:34.196 0 00:06:34.196 05:58:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71953 00:06:34.196 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71953 ']' 00:06:34.196 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71953 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71953 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71953' 00:06:34.197 killing process with pid 71953 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71953 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71953 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:34.197 00:06:34.197 real 0m1.351s 00:06:34.197 user 0m3.448s 00:06:34.197 sys 0m0.229s 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.197 05:58:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:34.197 ************************************ 00:06:34.197 END TEST bdev_bounds 00:06:34.197 ************************************ 00:06:34.197 05:58:59 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:34.197 05:58:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:34.197 05:58:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.197 05:58:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.197 ************************************ 00:06:34.197 START TEST bdev_nbd 00:06:34.197 ************************************ 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71996 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71996 /var/tmp/spdk-nbd.sock 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71996 ']' 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.197 05:58:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:34.456 [2024-10-01 05:58:59.850184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
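With bdev_svc coming up, the harness exports each of the six bdevs over NBD and proves every device with a single direct read, which is the `nbd_start_disk` / `waitfornbd` / `dd` cadence that fills the trace below. Boiled down: the 20-try budget, the `/proc/partitions` probe, and the 4 KiB direct read mirror the trace, while the sleep interval is an assumption.

```bash
# Boiled-down export-and-verify loop.
for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
    # With no /dev/nbdX argument the RPC picks a free device and prints it,
    # which is what the trace captures as nbd_device=/dev/nbd0 and so on.
    nbd=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "${nbd#/dev/}" /proc/partitions && break   # kernel sees it
        sleep 0.1
    done
    dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct    # prove it is readable
done
```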
00:06:34.456 [2024-10-01 05:58:59.850707] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.456 [2024-10-01 05:58:59.988755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.456 [2024-10-01 05:59:00.023217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.389 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.390 1+0 records in 
00:06:35.390 1+0 records out 00:06:35.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000961736 s, 4.3 MB/s 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.390 05:59:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.647 1+0 records in 00:06:35.647 1+0 records out 00:06:35.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900952 s, 4.5 MB/s 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.647 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.904 1+0 records in 00:06:35.904 1+0 records out 00:06:35.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000963267 s, 4.3 MB/s 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:35.904 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:35.905 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:35.905 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.163 1+0 records in 00:06:36.163 1+0 records out 00:06:36.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351457 s, 11.7 MB/s 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.163 05:59:01 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.163 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.421 1+0 records in 00:06:36.421 1+0 records out 00:06:36.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101403 s, 4.0 MB/s 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.421 05:59:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.679 1+0 records in 00:06:36.679 1+0 records out 00:06:36.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115893 s, 3.5 MB/s 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd0", 00:06:36.679 "bdev_name": "Nvme0n1" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd1", 00:06:36.679 "bdev_name": "Nvme1n1" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd2", 00:06:36.679 "bdev_name": "Nvme2n1" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd3", 00:06:36.679 "bdev_name": "Nvme2n2" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd4", 00:06:36.679 "bdev_name": "Nvme2n3" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd5", 00:06:36.679 "bdev_name": "Nvme3n1" 00:06:36.679 } 00:06:36.679 ]' 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd0", 00:06:36.679 "bdev_name": "Nvme0n1" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd1", 00:06:36.679 "bdev_name": "Nvme1n1" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd2", 00:06:36.679 "bdev_name": "Nvme2n1" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd3", 00:06:36.679 "bdev_name": "Nvme2n2" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd4", 00:06:36.679 "bdev_name": "Nvme2n3" 00:06:36.679 }, 00:06:36.679 { 00:06:36.679 "nbd_device": "/dev/nbd5", 00:06:36.679 "bdev_name": "Nvme3n1" 00:06:36.679 } 00:06:36.679 ]' 00:06:36.679 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.990 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.247 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.505 05:59:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:37.505 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:37.505 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:37.505 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:37.505 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.505 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.505 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:37.763 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.763 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.763 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.764 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.022 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.280 05:59:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:38.280 05:59:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:38.539 /dev/nbd0 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.539 
05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.539 1+0 records in 00:06:38.539 1+0 records out 00:06:38.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445073 s, 9.2 MB/s 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:38.539 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:38.797 /dev/nbd1 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.797 1+0 records in 00:06:38.797 1+0 records out 00:06:38.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254958 s, 16.1 MB/s 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:38.797 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:39.056 /dev/nbd10 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.056 1+0 records in 00:06:39.056 1+0 records out 00:06:39.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562827 s, 7.3 MB/s 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.056 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:39.314 /dev/nbd11 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.314 05:59:04 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.314 1+0 records in 00:06:39.314 1+0 records out 00:06:39.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483042 s, 8.5 MB/s 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.314 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.315 05:59:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:39.315 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.315 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.315 05:59:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:39.573 /dev/nbd12 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.573 1+0 records in 00:06:39.573 1+0 records out 00:06:39.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486128 s, 8.4 MB/s 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.573 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:39.832 /dev/nbd13 
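Every nbd_start_disk call in this trace is followed by the same readiness handshake (the Nvme3n1 -> /dev/nbd13 mapping just above gets the identical treatment immediately below): poll /proc/partitions until the device name appears, then prove the device actually serves reads with one direct-I/O dd and a non-zero stat of the copied block. Reconstructed from the autotest_common.sh@868-889 xtrace lines, the helper amounts to the sketch below; the delay between failed attempts is an assumption, since every attempt in this run succeeds on the first pass, and the scratch-file path is simplified.

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait for the kernel to register the device (autotest_common.sh@871-873).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed back-off; not visible in this trace
        done
        # Confirm the device answers a real read, not merely that it is
        # listed (autotest_common.sh@884-889).
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1  # assumed
        done
        return 1
    }
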
00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.832 1+0 records in 00:06:39.832 1+0 records out 00:06:39.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378885 s, 10.8 MB/s 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.832 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd0", 00:06:40.090 "bdev_name": "Nvme0n1" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd1", 00:06:40.090 "bdev_name": "Nvme1n1" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd10", 00:06:40.090 "bdev_name": "Nvme2n1" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd11", 00:06:40.090 "bdev_name": "Nvme2n2" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd12", 00:06:40.090 "bdev_name": "Nvme2n3" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd13", 00:06:40.090 "bdev_name": "Nvme3n1" 00:06:40.090 } 00:06:40.090 ]' 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd0", 00:06:40.090 "bdev_name": "Nvme0n1" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd1", 00:06:40.090 "bdev_name": "Nvme1n1" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd10", 00:06:40.090 "bdev_name": "Nvme2n1" 
00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd11", 00:06:40.090 "bdev_name": "Nvme2n2" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd12", 00:06:40.090 "bdev_name": "Nvme2n3" 00:06:40.090 }, 00:06:40.090 { 00:06:40.090 "nbd_device": "/dev/nbd13", 00:06:40.090 "bdev_name": "Nvme3n1" 00:06:40.090 } 00:06:40.090 ]' 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.090 /dev/nbd1 00:06:40.090 /dev/nbd10 00:06:40.090 /dev/nbd11 00:06:40.090 /dev/nbd12 00:06:40.090 /dev/nbd13' 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.090 /dev/nbd1 00:06:40.090 /dev/nbd10 00:06:40.090 /dev/nbd11 00:06:40.090 /dev/nbd12 00:06:40.090 /dev/nbd13' 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:40.090 256+0 records in 00:06:40.090 256+0 records out 00:06:40.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0087184 s, 120 MB/s 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.090 256+0 records in 00:06:40.090 256+0 records out 00:06:40.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0562275 s, 18.6 MB/s 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.090 256+0 records in 00:06:40.090 256+0 records out 00:06:40.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0545113 s, 19.2 MB/s 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:40.090 256+0 records in 00:06:40.090 256+0 records out 00:06:40.090 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.054466 s, 19.3 MB/s 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.090 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:40.348 256+0 records in 00:06:40.348 256+0 records out 00:06:40.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0573251 s, 18.3 MB/s 00:06:40.348 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.348 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:40.348 256+0 records in 00:06:40.348 256+0 records out 00:06:40.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0539892 s, 19.4 MB/s 00:06:40.348 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.348 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:40.348 256+0 records in 00:06:40.348 256+0 records out 00:06:40.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0536878 s, 19.5 MB/s 00:06:40.348 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:40.348 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.349 05:59:05 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.349 05:59:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.607 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.864 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.864 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.864 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.864 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.864 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.865 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.865 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.865 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.865 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.865 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.123 
05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.123 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:41.381 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:41.639 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:41.639 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:41.639 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.639 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.639 05:59:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.639 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:41.897 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:42.155 malloc_lvol_verify 00:06:42.155 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:42.413 64c87df4-590e-4c03-b932-d35cad12f253 00:06:42.413 05:59:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:42.670 9be49267-f433-4c41-a085-8ce6f09dfc5a 00:06:42.671 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:42.671 /dev/nbd0 00:06:42.671 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:42.671 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:42.671 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:42.671 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:42.671 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:42.671 mke2fs 1.47.0 (5-Feb-2023) 00:06:42.671 Discarding device blocks: 0/4096 done 00:06:42.671 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:42.671 00:06:42.671 Allocating group tables: 0/1 done 00:06:42.671 Writing inode tables: 0/1 done 00:06:42.929 Creating journal (1024 blocks): done 00:06:42.929 Writing superblocks and filesystem accounting information: 0/1 done 00:06:42.929 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
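Before the target is torn down, the harness makes one last pass through the logical-volume stack: a malloc bdev, an lvstore on it, a small lvol inside that, exported over /dev/nbd0, size-checked via /sys/block/nbd0/size, and formatted with ext4 (the matching nbd_stop_disk unwinds below). Stripped of the xtrace noise, the RPC sequence is the one sketched here, reproducible against any SPDK target listening on the same socket; the arguments are copied verbatim from the trace, and reading the positional sizes as 16 MiB / 512 B blocks / 4 MiB is an interpretation:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512  # backing bdev for the lvstore
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs  # prints the new lvstore UUID
    $RPC bdev_lvol_create lvol 4 -l lvs                   # prints the new lvol UUID
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                # export over the kernel NBD driver
    mkfs.ext4 /dev/nbd0                                   # the volume must accept real writes
    $RPC nbd_stop_disk /dev/nbd0
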
00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71996 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71996 ']' 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71996 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71996 00:06:42.929 killing process with pid 71996 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71996' 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71996 00:06:42.929 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71996 00:06:43.187 05:59:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:43.187 00:06:43.187 real 0m8.959s 00:06:43.187 user 0m13.321s 00:06:43.187 sys 0m2.884s 00:06:43.187 ************************************ 00:06:43.187 END TEST bdev_nbd 00:06:43.187 ************************************ 00:06:43.187 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.187 05:59:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:43.187 05:59:08 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:43.187 05:59:08 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:43.187 skipping fio tests on NVMe due to multi-ns failures. 00:06:43.187 05:59:08 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
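What follows is the first of three bdevperf passes over the same six bdevs: a 4 KiB verify run, a 64 KiB big-I/O verify run, and a short write_zeroes run, each loading the bdev.json the harness generated for the four NVMe controllers. The verify invocation at the top of the next trace block can be reproduced standalone; the flag annotations below cover the options the trace shows, with -C copied as-is from the harness invocation:

    cd /home/vagrant/spdk_repo/spdk
    #   -q 128     queue depth: outstanding I/Os per job
    #   -o 4096    I/O size in bytes (the big-I/O pass swaps in -o 65536)
    #   -w verify  write, read back, and compare
    #   -t 5       run time in seconds
    #   -m 0x3     core mask: cores 0 and 1
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The write_zeroes pass at the end of this section drops the core mask and runs -w write_zeroes -t 1 instead.
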
00:06:43.187 05:59:08 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:43.187 05:59:08 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:43.187 05:59:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:43.187 05:59:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.187 05:59:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.187 ************************************ 00:06:43.187 START TEST bdev_verify 00:06:43.187 ************************************ 00:06:43.187 05:59:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:43.444 [2024-10-01 05:59:08.856050] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:43.444 [2024-10-01 05:59:08.856167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72362 ] 00:06:43.444 [2024-10-01 05:59:08.989225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.444 [2024-10-01 05:59:09.025423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.444 [2024-10-01 05:59:09.025525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.009 Running I/O for 5 seconds... 00:06:49.190 24064.00 IOPS, 94.00 MiB/s 23680.00 IOPS, 92.50 MiB/s 23040.00 IOPS, 90.00 MiB/s 22640.00 IOPS, 88.44 MiB/s 22449.80 IOPS, 87.69 MiB/s 00:06:49.190 Latency(us) 00:06:49.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.190 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x0 length 0xbd0bd 00:06:49.190 Nvme0n1 : 5.05 1823.36 7.12 0.00 0.00 69989.61 14014.62 81869.59 00:06:49.190 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:49.190 Nvme0n1 : 5.05 1876.89 7.33 0.00 0.00 67969.87 13712.15 68964.04 00:06:49.190 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x0 length 0xa0000 00:06:49.190 Nvme1n1 : 5.06 1822.23 7.12 0.00 0.00 69869.98 16736.89 68964.04 00:06:49.190 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0xa0000 length 0xa0000 00:06:49.190 Nvme1n1 : 5.05 1876.31 7.33 0.00 0.00 67885.97 17241.01 60898.07 00:06:49.190 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x0 length 0x80000 00:06:49.190 Nvme2n1 : 5.06 1821.62 7.12 0.00 0.00 69751.23 18047.61 65334.35 00:06:49.190 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x80000 length 0x80000 00:06:49.190 Nvme2n1 : 5.06 1883.34 7.36 0.00 0.00 67533.01 6225.92 59284.87 00:06:49.190 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x0 length 0x80000 00:06:49.190 
Nvme2n2 : 5.06 1821.08 7.11 0.00 0.00 69605.53 17845.96 56461.78 00:06:49.190 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x80000 length 0x80000 00:06:49.190 Nvme2n2 : 5.06 1876.25 7.33 0.00 0.00 67673.92 7208.96 74206.92 00:06:49.190 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x0 length 0x80000 00:06:49.190 Nvme2n3 : 5.07 1829.34 7.15 0.00 0.00 69178.52 4461.49 57268.38 00:06:49.190 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x80000 length 0x80000 00:06:49.190 Nvme2n3 : 5.07 1875.11 7.32 0.00 0.00 67585.11 7259.37 71383.83 00:06:49.190 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x0 length 0x20000 00:06:49.190 Nvme3n1 : 5.08 1837.90 7.18 0.00 0.00 68794.53 6956.90 74206.92 00:06:49.190 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:49.190 Verification LBA range: start 0x20000 length 0x20000 00:06:49.190 Nvme3n1 : 5.07 1879.31 7.34 0.00 0.00 67300.34 5520.15 69770.63 00:06:49.190 =================================================================================================================== 00:06:49.190 Total : 22222.74 86.81 0.00 0.00 68580.72 4461.49 81869.59 00:06:49.764 00:06:49.764 real 0m6.530s 00:06:49.764 user 0m12.145s 00:06:49.764 sys 0m0.202s 00:06:49.764 05:59:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.764 ************************************ 00:06:49.764 END TEST bdev_verify 00:06:49.764 05:59:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:49.764 ************************************ 00:06:49.764 05:59:15 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:49.764 05:59:15 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:49.764 05:59:15 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.764 05:59:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:50.026 ************************************ 00:06:50.026 START TEST bdev_verify_big_io 00:06:50.026 ************************************ 00:06:50.026 05:59:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:50.026 [2024-10-01 05:59:15.462705] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:50.026 [2024-10-01 05:59:15.462916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72449 ] 00:06:50.026 [2024-10-01 05:59:15.593837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.286 [2024-10-01 05:59:15.652268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.286 [2024-10-01 05:59:15.652397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.547 Running I/O for 5 seconds... 
00:06:56.684 1140.00 IOPS, 71.25 MiB/s 1976.00 IOPS, 123.50 MiB/s 2064.33 IOPS, 129.02 MiB/s 2240.75 IOPS, 140.05 MiB/s 00:06:56.684 Latency(us) 00:06:56.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.684 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x0 length 0xbd0b 00:06:56.684 Nvme0n1 : 5.74 129.38 8.09 0.00 0.00 947701.31 19358.33 1006632.96 00:06:56.684 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:56.684 Nvme0n1 : 5.55 125.31 7.83 0.00 0.00 976493.94 36296.86 987274.63 00:06:56.684 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x0 length 0xa000 00:06:56.684 Nvme1n1 : 5.74 130.22 8.14 0.00 0.00 915555.80 99211.42 813049.70 00:06:56.684 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0xa000 length 0xa000 00:06:56.684 Nvme1n1 : 5.72 130.32 8.14 0.00 0.00 922543.61 112116.97 822728.86 00:06:56.684 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x0 length 0x8000 00:06:56.684 Nvme2n1 : 5.74 131.54 8.22 0.00 0.00 881411.73 68157.44 774333.05 00:06:56.684 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x8000 length 0x8000 00:06:56.684 Nvme2n1 : 5.73 132.89 8.31 0.00 0.00 885955.57 56461.78 935652.43 00:06:56.684 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x0 length 0x8000 00:06:56.684 Nvme2n2 : 5.84 136.36 8.52 0.00 0.00 829554.03 60091.47 1206669.00 00:06:56.684 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x8000 length 0x8000 00:06:56.684 Nvme2n2 : 5.75 138.34 8.65 0.00 0.00 828963.20 24802.86 896935.78 00:06:56.684 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x0 length 0x8000 00:06:56.684 Nvme2n3 : 5.89 138.82 8.68 0.00 0.00 793500.83 31860.58 1742249.35 00:06:56.684 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x8000 length 0x8000 00:06:56.684 Nvme2n3 : 5.80 144.68 9.04 0.00 0.00 767813.82 42749.64 929199.66 00:06:56.684 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x0 length 0x2000 00:06:56.684 Nvme3n1 : 5.89 159.63 9.98 0.00 0.00 672529.72 1386.34 1780966.01 00:06:56.684 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:56.684 Verification LBA range: start 0x2000 length 0x2000 00:06:56.684 Nvme3n1 : 5.87 163.43 10.21 0.00 0.00 663029.33 1216.20 948557.98 00:06:56.684 =================================================================================================================== 00:06:56.684 Total : 1660.94 103.81 0.00 0.00 831234.78 1216.20 1780966.01 00:06:57.317 00:06:57.317 real 0m7.514s 00:06:57.317 user 0m14.147s 00:06:57.317 sys 0m0.289s 00:06:57.317 05:59:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.317 ************************************ 00:06:57.317 END TEST bdev_verify_big_io 00:06:57.317 05:59:22 blockdev_nvme.bdev_verify_big_io -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.317 ************************************ 00:06:57.578 05:59:22 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.578 05:59:22 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:57.578 05:59:22 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.578 05:59:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.578 ************************************ 00:06:57.578 START TEST bdev_write_zeroes 00:06:57.578 ************************************ 00:06:57.578 05:59:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.578 [2024-10-01 05:59:23.046966] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:57.578 [2024-10-01 05:59:23.047120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72555 ] 00:06:57.578 [2024-10-01 05:59:23.187458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.839 [2024-10-01 05:59:23.253112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.101 Running I/O for 1 seconds... 00:06:59.482 56448.00 IOPS, 220.50 MiB/s 00:06:59.482 Latency(us) 00:06:59.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.482 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:59.482 Nvme0n1 : 1.02 9398.32 36.71 0.00 0.00 13587.56 4839.58 29037.49 00:06:59.482 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:59.482 Nvme1n1 : 1.02 9387.36 36.67 0.00 0.00 13590.43 9527.93 25004.50 00:06:59.482 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:59.482 Nvme2n1 : 1.02 9376.58 36.63 0.00 0.00 13525.66 9628.75 25407.80 00:06:59.483 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:59.483 Nvme2n2 : 1.03 9365.73 36.58 0.00 0.00 13522.38 9527.93 26617.70 00:06:59.483 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:59.483 Nvme2n3 : 1.03 9354.97 36.54 0.00 0.00 13499.91 8519.68 26416.05 00:06:59.483 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:59.483 Nvme3n1 : 1.03 9344.28 36.50 0.00 0.00 13477.42 7108.14 24601.21 00:06:59.483 =================================================================================================================== 00:06:59.483 Total : 56227.23 219.64 0.00 0.00 13533.89 4839.58 29037.49 00:06:59.483 00:06:59.483 real 0m1.939s 00:06:59.483 user 0m1.606s 00:06:59.483 sys 0m0.217s 00:06:59.483 05:59:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.483 ************************************ 00:06:59.483 END TEST bdev_write_zeroes 00:06:59.483 05:59:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:59.483 ************************************ 00:06:59.483 05:59:24 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test 
bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.483 05:59:24 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:59.483 05:59:24 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.483 05:59:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.483 ************************************ 00:06:59.483 START TEST bdev_json_nonenclosed 00:06:59.483 ************************************ 00:06:59.483 05:59:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.483 [2024-10-01 05:59:25.041616] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:59.483 [2024-10-01 05:59:25.041748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72597 ] 00:06:59.744 [2024-10-01 05:59:25.177190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.745 [2024-10-01 05:59:25.216311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.745 [2024-10-01 05:59:25.216400] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:59.745 [2024-10-01 05:59:25.216417] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:59.745 [2024-10-01 05:59:25.216428] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.745 00:06:59.745 real 0m0.328s 00:06:59.745 user 0m0.141s 00:06:59.745 sys 0m0.083s 00:06:59.745 05:59:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.745 05:59:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:59.745 ************************************ 00:06:59.745 END TEST bdev_json_nonenclosed 00:06:59.745 ************************************ 00:06:59.745 05:59:25 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.745 05:59:25 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:00.007 05:59:25 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.007 05:59:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.007 ************************************ 00:07:00.007 START TEST bdev_json_nonarray 00:07:00.007 ************************************ 00:07:00.007 05:59:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:00.007 [2024-10-01 05:59:25.435274] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
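Both JSON negative tests, the nonenclosed failure above and the nonarray run starting here, probe the same contract: the app only accepts a single top-level object whose "subsystems" key is an array, and anything else aborts with the errors logged. An illustrative contrast in shell form; the two bad shapes are assumptions about what the fixtures look like, not copies of them:

    # Accepted: one enclosing object, "subsystems" is an array.
    cat > good.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # Rejected as "not enclosed in {}" (assumed shape of nonenclosed.json):
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # Rejected as "'subsystems' should be an array" (assumed shape of nonarray.json):
    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF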
00:07:00.007 [2024-10-01 05:59:25.435399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72617 ] 00:07:00.007 [2024-10-01 05:59:25.572724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.007 [2024-10-01 05:59:25.609154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.007 [2024-10-01 05:59:25.609250] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:07:00.007 [2024-10-01 05:59:25.609266] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:00.007 [2024-10-01 05:59:25.609277] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.269 00:07:00.269 real 0m0.327s 00:07:00.269 user 0m0.130s 00:07:00.269 sys 0m0.094s 00:07:00.269 05:59:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.269 05:59:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:00.269 ************************************ 00:07:00.269 END TEST bdev_json_nonarray 00:07:00.269 ************************************ 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:00.269 05:59:25 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:00.269 00:07:00.269 real 0m29.981s 00:07:00.269 user 0m47.392s 00:07:00.269 sys 0m4.821s 00:07:00.269 05:59:25 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.269 ************************************ 00:07:00.269 END TEST blockdev_nvme 00:07:00.269 ************************************ 00:07:00.269 05:59:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.269 05:59:25 -- spdk/autotest.sh@209 -- # uname -s 00:07:00.269 05:59:25 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:00.269 05:59:25 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:00.269 05:59:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:00.269 05:59:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.269 05:59:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.269 ************************************ 00:07:00.269 START TEST blockdev_nvme_gpt 00:07:00.269 ************************************ 00:07:00.269 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:00.531 * Looking for test storage... 
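The hand-off traced above uses the same run_test wrapper that brackets every TEST block in this log: autotest.sh gates on the kernel name, then re-runs the same driver script with a new mode argument, which blockdev.sh later picks up as test_type=gpt:

    # Dispatch as recorded above; run_test adds the START/END banners and timing.
    [[ "$(uname -s)" == Linux ]] && \
      run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt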
00:07:00.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.531 05:59:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.531 --rc genhtml_branch_coverage=1 00:07:00.531 --rc genhtml_function_coverage=1 00:07:00.531 --rc genhtml_legend=1 00:07:00.531 --rc geninfo_all_blocks=1 00:07:00.531 --rc geninfo_unexecuted_blocks=1 00:07:00.531 00:07:00.531 ' 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.531 --rc 
genhtml_branch_coverage=1 00:07:00.531 --rc genhtml_function_coverage=1 00:07:00.531 --rc genhtml_legend=1 00:07:00.531 --rc geninfo_all_blocks=1 00:07:00.531 --rc geninfo_unexecuted_blocks=1 00:07:00.531 00:07:00.531 ' 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.531 --rc genhtml_branch_coverage=1 00:07:00.531 --rc genhtml_function_coverage=1 00:07:00.531 --rc genhtml_legend=1 00:07:00.531 --rc geninfo_all_blocks=1 00:07:00.531 --rc geninfo_unexecuted_blocks=1 00:07:00.531 00:07:00.531 ' 00:07:00.531 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.531 --rc genhtml_branch_coverage=1 00:07:00.531 --rc genhtml_function_coverage=1 00:07:00.531 --rc genhtml_legend=1 00:07:00.531 --rc geninfo_all_blocks=1 00:07:00.531 --rc geninfo_unexecuted_blocks=1 00:07:00.531 00:07:00.531 ' 00:07:00.531 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.531 05:59:25 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.531 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:00.531 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.531 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72701 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:00.532 05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72701 00:07:00.532 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 72701 ']' 00:07:00.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.532 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.532 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.532 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.532 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.532 05:59:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.532 [2024-10-01 05:59:26.053308] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:00.532 [2024-10-01 05:59:26.053457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72701 ] 00:07:00.792 [2024-10-01 05:59:26.189923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.792 [2024-10-01 05:59:26.243179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.363 05:59:26 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.364 05:59:26 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:01.364 05:59:26 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:01.364 05:59:26 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:01.364 05:59:26 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:01.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:01.887 Waiting for block devices as requested 00:07:01.887 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.887 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.147 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.147 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:07.431 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:07.431 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:07.431 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:07.431 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:07.431 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:07.431 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:07.431 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:07.431 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
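The get_zoned_devs pass traced above boils down to one sysfs probe per namespace; a condensed sketch of that logic (the array handling here is illustrative, the real helper fills an associative array):

    # A device counts as zoned, and is excluded from GPT setup, when its
    # queue/zoned attribute exists and reads anything other than "none".
    zoned_devs=()
    for nvme in /sys/block/nvme*; do
      if [[ -e "$nvme/queue/zoned" && "$(cat "$nvme/queue/zoned")" != none ]]; then
        zoned_devs+=("${nvme##*/}")
      fi
    done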
00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:07.432 BYT; 00:07:07.432 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:07.432 BYT; 00:07:07.432 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.432 05:59:32 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:07.432 05:59:32 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:08.373 The operation has completed successfully. 00:07:08.373 05:59:33 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:09.313 The operation has completed successfully. 00:07:09.313 05:59:34 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:09.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:10.452 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.452 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.452 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.452 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:10.452 05:59:35 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:10.452 05:59:35 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.452 05:59:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.452 [] 00:07:10.452 05:59:35 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.452 05:59:35 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:10.452 05:59:35 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:10.452 05:59:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:10.452 05:59:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:10.452 05:59:35 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:10.452 05:59:35 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.452 05:59:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.711 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.711 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:10.711 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:10.711 05:59:36 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.711 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.711 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.711 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:10.711 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.711 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:10.711 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.971 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.971 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:10.971 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:10.972 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "223a428b-f1f7-4764-b0d0-480959f49be1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "223a428b-f1f7-4764-b0d0-480959f49be1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ce5c9c70-b6df-4486-be01-cef343d9c295"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce5c9c70-b6df-4486-be01-cef343d9c295",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9fb4661d-c573-44a0-8075-71c955d80b74"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9fb4661d-c573-44a0-8075-71c955d80b74",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "79a857b8-62d3-418b-862d-198ce0294e69"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "79a857b8-62d3-418b-862d-198ce0294e69",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e3a27bdd-7bcc-4386-aabe-516db7e5661d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e3a27bdd-7bcc-4386-aabe-516db7e5661d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:10.972 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:10.972 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:10.972 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:10.972 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72701 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 72701 ']' 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 72701 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72701 00:07:10.972 killing process with pid 72701 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72701' 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 72701 00:07:10.972 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 72701 00:07:11.230 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:11.230 05:59:36 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.230 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:11.230 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.230 05:59:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.230 ************************************ 00:07:11.230 START TEST bdev_hello_world 00:07:11.230 ************************************ 00:07:11.230 05:59:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.230 
[2024-10-01 05:59:36.740362] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:11.230 [2024-10-01 05:59:36.740481] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73306 ] 00:07:11.489 [2024-10-01 05:59:36.874769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.489 [2024-10-01 05:59:36.904476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.777 [2024-10-01 05:59:37.262308] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:11.777 [2024-10-01 05:59:37.262465] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:11.777 [2024-10-01 05:59:37.262492] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:11.777 [2024-10-01 05:59:37.264156] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:11.777 [2024-10-01 05:59:37.264627] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:11.777 [2024-10-01 05:59:37.264653] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:11.777 [2024-10-01 05:59:37.264925] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:11.777 00:07:11.777 [2024-10-01 05:59:37.264948] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:12.060 ************************************ 00:07:12.060 00:07:12.060 real 0m0.724s 00:07:12.060 user 0m0.483s 00:07:12.060 sys 0m0.138s 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:12.060 END TEST bdev_hello_world 00:07:12.060 ************************************ 00:07:12.060 05:59:37 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:12.060 05:59:37 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.060 05:59:37 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.060 05:59:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.060 ************************************ 00:07:12.060 START TEST bdev_bounds 00:07:12.060 ************************************ 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73339 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.060 Process bdevio pid: 73339 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73339' 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73339 00:07:12.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
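The bdev_bounds test just opened runs in two halves, traced below: a bdevio server started with -w so it comes up and waits for RPCs, and a Python runner that drives the CUnit suites over that socket. A sketch of the pairing; backgrounding the server with & is an assumption about how the script sequences the two:

    # Server side: -w = wait for the perform_tests RPC; arguments as logged.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # Client side: kick off every registered bdevio suite.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests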
00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73339 ']' 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:12.060 05:59:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.060 [2024-10-01 05:59:37.515631] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:12.060 [2024-10-01 05:59:37.515727] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73339 ] 00:07:12.060 [2024-10-01 05:59:37.647105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.318 [2024-10-01 05:59:37.678119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.318 [2024-10-01 05:59:37.678350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.318 [2024-10-01 05:59:37.678415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.887 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.887 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:12.887 05:59:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:12.887 I/O targets: 00:07:12.887 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:12.887 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:12.887 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:12.887 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.887 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.887 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.887 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:12.887 00:07:12.887 00:07:12.887 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.887 http://cunit.sourceforge.net/ 00:07:12.887 00:07:12.887 00:07:12.887 Suite: bdevio tests on: Nvme3n1 00:07:12.887 Test: blockdev write read block ...passed 00:07:12.887 Test: blockdev write zeroes read block ...passed 00:07:12.887 Test: blockdev write zeroes read no split ...passed 00:07:12.887 Test: blockdev write zeroes read split ...passed 00:07:12.887 Test: blockdev write zeroes read split partial ...passed 00:07:12.887 Test: blockdev reset ...[2024-10-01 05:59:38.481421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:12.887 [2024-10-01 05:59:38.483543] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:12.887 passed 00:07:12.887 Test: blockdev write read 8 blocks ...passed 00:07:12.887 Test: blockdev write read size > 128k ...passed 00:07:12.887 Test: blockdev write read invalid size ...passed 00:07:12.887 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.887 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.887 Test: blockdev write read max offset ...passed 00:07:12.887 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.887 Test: blockdev writev readv 8 blocks ...passed 00:07:12.887 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.887 Test: blockdev writev readv block ...passed 00:07:12.887 Test: blockdev writev readv size > 128k ...passed 00:07:12.887 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.887 Test: blockdev comparev and writev ...[2024-10-01 05:59:38.489636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bde0a000 len:0x1000 00:07:12.887 [2024-10-01 05:59:38.489778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.887 passed 00:07:12.887 Test: blockdev nvme passthru rw ...passed 00:07:12.887 Test: blockdev nvme passthru vendor specific ...[2024-10-01 05:59:38.490718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.887 [2024-10-01 05:59:38.490865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.887 passed 00:07:12.887 Test: blockdev nvme admin passthru ...passed 00:07:12.887 Test: blockdev copy ...passed 00:07:12.887 Suite: bdevio tests on: Nvme2n3 00:07:12.887 Test: blockdev write read block ...passed 00:07:12.887 Test: blockdev write zeroes read block ...passed 00:07:12.887 Test: blockdev write zeroes read no split ...passed 00:07:12.887 Test: blockdev write zeroes read split ...passed 00:07:12.887 Test: blockdev write zeroes read split partial ...passed 00:07:12.887 Test: blockdev reset ...[2024-10-01 05:59:38.502774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:13.146 [2024-10-01 05:59:38.504339] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:13.146 passed 00:07:13.146 Test: blockdev write read 8 blocks ...passed 00:07:13.146 Test: blockdev write read size > 128k ...passed 00:07:13.146 Test: blockdev write read invalid size ...passed 00:07:13.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.146 Test: blockdev write read max offset ...passed 00:07:13.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.146 Test: blockdev writev readv 8 blocks ...passed 00:07:13.146 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.146 Test: blockdev writev readv block ...passed 00:07:13.146 Test: blockdev writev readv size > 128k ...passed 00:07:13.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.146 Test: blockdev comparev and writev ...[2024-10-01 05:59:38.508729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb004000 len:0x1000 00:07:13.146 [2024-10-01 05:59:38.508771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.146 passed 00:07:13.146 Test: blockdev nvme passthru rw ...passed 00:07:13.146 Test: blockdev nvme passthru vendor specific ...[2024-10-01 05:59:38.509308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.146 passed 00:07:13.146 Test: blockdev nvme admin passthru ... 00:07:13.147 [2024-10-01 05:59:38.509405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.147 passed 00:07:13.147 Test: blockdev copy ...passed 00:07:13.147 Suite: bdevio tests on: Nvme2n2 00:07:13.147 Test: blockdev write read block ...passed 00:07:13.147 Test: blockdev write zeroes read block ...passed 00:07:13.147 Test: blockdev write zeroes read no split ...passed 00:07:13.147 Test: blockdev write zeroes read split ...passed 00:07:13.147 Test: blockdev write zeroes read split partial ...passed 00:07:13.147 Test: blockdev reset ...[2024-10-01 05:59:38.523543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:13.147 passed 00:07:13.147 Test: blockdev write read 8 blocks ...[2024-10-01 05:59:38.525059] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
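The per-suite reset test above (the paired nvme_ctrlr_disconnect and _bdev_nvme_reset_ctrlr_complete notices) detaches and re-attaches the controller behind each namespace. The same path can be exercised by hand against a running target; a sketch, assuming scripts/rpc.py exposes the bdev_nvme_reset_controller method and that the controller backing the Nvme2n* bdevs is registered under the name Nvme2:

  # hypothetical manual reset of the controller backing the Nvme2n* bdevs
  # (assumes the bdev_nvme_reset_controller RPC and the name Nvme2)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      bdev_nvme_reset_controller Nvme2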
00:07:13.147 passed 00:07:13.147 Test: blockdev write read size > 128k ...passed 00:07:13.147 Test: blockdev write read invalid size ...passed 00:07:13.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.147 Test: blockdev write read max offset ...passed 00:07:13.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.147 Test: blockdev writev readv 8 blocks ...passed 00:07:13.147 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.147 Test: blockdev writev readv block ...passed 00:07:13.147 Test: blockdev writev readv size > 128k ...passed 00:07:13.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.147 Test: blockdev comparev and writev ...[2024-10-01 05:59:38.529248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb004000 len:0x1000 00:07:13.147 [2024-10-01 05:59:38.529285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.147 passed 00:07:13.147 Test: blockdev nvme passthru rw ...passed 00:07:13.147 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.147 Test: blockdev nvme admin passthru ...[2024-10-01 05:59:38.529869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.147 [2024-10-01 05:59:38.529895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.147 passed 00:07:13.147 Test: blockdev copy ...passed 00:07:13.147 Suite: bdevio tests on: Nvme2n1 00:07:13.147 Test: blockdev write read block ...passed 00:07:13.147 Test: blockdev write zeroes read block ...passed 00:07:13.147 Test: blockdev write zeroes read no split ...passed 00:07:13.147 Test: blockdev write zeroes read split ...passed 00:07:13.147 Test: blockdev write zeroes read split partial ...passed 00:07:13.147 Test: blockdev reset ...[2024-10-01 05:59:38.544559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:13.147 [2024-10-01 05:59:38.546050] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:13.147 passed 00:07:13.147 Test: blockdev write read 8 blocks ...passed 00:07:13.147 Test: blockdev write read size > 128k ...passed 00:07:13.147 Test: blockdev write read invalid size ...passed 00:07:13.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.147 Test: blockdev write read max offset ...passed 00:07:13.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.147 Test: blockdev writev readv 8 blocks ...passed 00:07:13.147 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.147 Test: blockdev writev readv block ...passed 00:07:13.147 Test: blockdev writev readv size > 128k ...passed 00:07:13.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.147 Test: blockdev comparev and writev ...[2024-10-01 05:59:38.550585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb006000 len:0x1000 00:07:13.147 [2024-10-01 05:59:38.550622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.147 passed 00:07:13.147 Test: blockdev nvme passthru rw ...passed 00:07:13.147 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.147 Test: blockdev nvme admin passthru ...[2024-10-01 05:59:38.551205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.147 [2024-10-01 05:59:38.551232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.147 passed 00:07:13.147 Test: blockdev copy ...passed 00:07:13.147 Suite: bdevio tests on: Nvme1n1p2 00:07:13.147 Test: blockdev write read block ...passed 00:07:13.147 Test: blockdev write zeroes read block ...passed 00:07:13.147 Test: blockdev write zeroes read no split ...passed 00:07:13.147 Test: blockdev write zeroes read split ...passed 00:07:13.147 Test: blockdev write zeroes read split partial ...passed 00:07:13.147 Test: blockdev reset ...[2024-10-01 05:59:38.566643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:13.147 passed 00:07:13.147 Test: blockdev write read 8 blocks ...[2024-10-01 05:59:38.568064] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
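The autotest_common.sh fragments traced throughout this section (local rpc_addr, local max_retries=100, the (( i == 0 )) check, return 0) are the waitforlisten helper's contract: poll the app's RPC socket until it answers, and give up if the process dies first or the retry budget runs out. A rough bash sketch of that contract, not the helper's actual body; $pid and $rpc_addr stand in for its locals, and rpc_get_methods serves as a generic liveness probe:

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 1; i <= 100; i++)); do   # max_retries=100, as in the trace
          # any successful RPC means the socket is up and the app is serving
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
              rpc_get_methods >/dev/null 2>&1 && return 0
          kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
          sleep 0.1
      done
      return 1
  }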
00:07:13.147 passed 00:07:13.147 Test: blockdev write read size > 128k ...passed 00:07:13.147 Test: blockdev write read invalid size ...passed 00:07:13.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.147 Test: blockdev write read max offset ...passed 00:07:13.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.147 Test: blockdev writev readv 8 blocks ...passed 00:07:13.147 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.147 Test: blockdev writev readv block ...passed 00:07:13.147 Test: blockdev writev readv size > 128k ...passed 00:07:13.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.147 Test: blockdev comparev and writev ...[2024-10-01 05:59:38.572887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2bb002000 len:0x1000 00:07:13.147 [2024-10-01 05:59:38.572922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.147 passed 00:07:13.147 Test: blockdev nvme passthru rw ...passed 00:07:13.147 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.147 Test: blockdev nvme admin passthru ...passed 00:07:13.147 Test: blockdev copy ...passed 00:07:13.147 Suite: bdevio tests on: Nvme1n1p1 00:07:13.147 Test: blockdev write read block ...passed 00:07:13.147 Test: blockdev write zeroes read block ...passed 00:07:13.147 Test: blockdev write zeroes read no split ...passed 00:07:13.147 Test: blockdev write zeroes read split ...passed 00:07:13.147 Test: blockdev write zeroes read split partial ...passed 00:07:13.147 Test: blockdev reset ...[2024-10-01 05:59:38.588913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:13.147 [2024-10-01 05:59:38.590224] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:13.147 passed 00:07:13.147 Test: blockdev write read 8 blocks ...passed 00:07:13.147 Test: blockdev write read size > 128k ...passed 00:07:13.147 Test: blockdev write read invalid size ...passed 00:07:13.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.147 Test: blockdev write read max offset ...passed 00:07:13.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.147 Test: blockdev writev readv 8 blocks ...passed 00:07:13.147 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.147 Test: blockdev writev readv block ...passed 00:07:13.147 Test: blockdev writev readv size > 128k ...passed 00:07:13.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.147 Test: blockdev comparev and writev ...[2024-10-01 05:59:38.594878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c103b000 len:0x1000 00:07:13.147 [2024-10-01 05:59:38.594912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.147 passed 00:07:13.147 Test: blockdev nvme passthru rw ...passed 00:07:13.147 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.147 Test: blockdev nvme admin passthru ...passed 00:07:13.147 Test: blockdev copy ...passed 00:07:13.147 Suite: bdevio tests on: Nvme0n1 00:07:13.147 Test: blockdev write read block ...passed 00:07:13.147 Test: blockdev write zeroes read block ...passed 00:07:13.147 Test: blockdev write zeroes read no split ...passed 00:07:13.147 Test: blockdev write zeroes read split ...passed 00:07:13.147 Test: blockdev write zeroes read split partial ...passed 00:07:13.147 Test: blockdev reset ...[2024-10-01 05:59:38.605561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:13.147 passed 00:07:13.147 Test: blockdev write read 8 blocks ...[2024-10-01 05:59:38.606880] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:13.147 passed 00:07:13.147 Test: blockdev write read size > 128k ...passed 00:07:13.147 Test: blockdev write read invalid size ...passed 00:07:13.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.147 Test: blockdev write read max offset ...passed 00:07:13.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.147 Test: blockdev writev readv 8 blocks ...passed 00:07:13.147 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.147 Test: blockdev writev readv block ...passed 00:07:13.147 Test: blockdev writev readv size > 128k ...passed 00:07:13.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.147 Test: blockdev comparev and writev ...passed 00:07:13.147 Test: blockdev nvme passthru rw ...[2024-10-01 05:59:38.610177] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:13.147 separate metadata which is not supported yet. 
00:07:13.147 passed 00:07:13.147 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.147 Test: blockdev nvme admin passthru ...[2024-10-01 05:59:38.610472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:13.147 [2024-10-01 05:59:38.610508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:13.147 passed 00:07:13.148 Test: blockdev copy ...passed 00:07:13.148 00:07:13.148 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.148 suites 7 7 n/a 0 0 00:07:13.148 tests 161 161 161 0 0 00:07:13.148 asserts 1025 1025 1025 0 n/a 00:07:13.148 00:07:13.148 Elapsed time = 0.359 seconds 00:07:13.148 0 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73339 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73339 ']' 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73339 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73339 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73339' 00:07:13.148 killing process with pid 73339 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73339 00:07:13.148 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73339 00:07:13.406 05:59:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:13.406 00:07:13.406 real 0m1.334s 00:07:13.406 user 0m3.483s 00:07:13.406 sys 0m0.246s 00:07:13.406 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.406 05:59:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:13.406 ************************************ 00:07:13.406 END TEST bdev_bounds 00:07:13.406 ************************************ 00:07:13.406 05:59:38 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.406 05:59:38 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:13.406 05:59:38 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.406 05:59:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.406 ************************************ 00:07:13.406 START TEST bdev_nbd 00:07:13.406 ************************************ 00:07:13.406 05:59:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.406 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:13.406 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73382 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73382 /var/tmp/spdk-nbd.sock 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73382 ']' 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 05:59:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:13.407 [2024-10-01 05:59:38.902531] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:13.407 [2024-10-01 05:59:38.902719] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.665 [2024-10-01 05:59:39.031962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.665 [2024-10-01 05:59:39.061712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.231 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:14.488 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:14.488 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:14.488 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:14.488 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.488 05:59:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.488 1+0 records in 00:07:14.488 1+0 records out 00:07:14.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314854 s, 13.0 MB/s 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.488 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.489 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.489 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.489 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.489 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.746 1+0 records in 00:07:14.746 1+0 records out 00:07:14.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354976 s, 11.5 MB/s 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.746 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.002 1+0 records in 00:07:15.002 1+0 records out 00:07:15.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267744 s, 15.3 MB/s 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.002 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.003 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:15.258 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.259 1+0 records in 00:07:15.259 1+0 records out 00:07:15.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417618 s, 9.8 MB/s 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.259 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:15.515 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:15.516 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:15.516 05:59:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:15.516 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:15.516 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.516 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.516 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.516 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:15.516 05:59:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.516 1+0 records in 00:07:15.516 1+0 records out 00:07:15.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437225 s, 9.4 MB/s 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.516 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.773 1+0 records in 00:07:15.773 1+0 records out 00:07:15.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330608 s, 12.4 MB/s 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.773 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:16.030 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.031 1+0 records in 00:07:16.031 1+0 records out 00:07:16.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361148 s, 11.3 MB/s 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.031 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd0", 00:07:16.289 "bdev_name": "Nvme0n1" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd1", 00:07:16.289 "bdev_name": "Nvme1n1p1" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd2", 00:07:16.289 "bdev_name": "Nvme1n1p2" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd3", 00:07:16.289 "bdev_name": "Nvme2n1" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd4", 00:07:16.289 "bdev_name": "Nvme2n2" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd5", 00:07:16.289 "bdev_name": "Nvme2n3" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd6", 00:07:16.289 "bdev_name": "Nvme3n1" 00:07:16.289 } 00:07:16.289 ]' 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd0", 00:07:16.289 "bdev_name": "Nvme0n1" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd1", 00:07:16.289 "bdev_name": "Nvme1n1p1" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd2", 00:07:16.289 "bdev_name": "Nvme1n1p2" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd3", 00:07:16.289 "bdev_name": "Nvme2n1" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd4", 00:07:16.289 "bdev_name": "Nvme2n2" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd5", 00:07:16.289 "bdev_name": "Nvme2n3" 00:07:16.289 }, 00:07:16.289 { 00:07:16.289 "nbd_device": "/dev/nbd6", 00:07:16.289 "bdev_name": "Nvme3n1" 00:07:16.289 } 00:07:16.289 ]' 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.289 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.546 05:59:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.546 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.804 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.804 05:59:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.062 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.321 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.579 05:59:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.579 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.838 
05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.838 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:18.096 /dev/nbd0 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.096 1+0 records in 00:07:18.096 1+0 records out 00:07:18.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397903 s, 10.3 MB/s 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.096 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:18.354 /dev/nbd1 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.354 05:59:43 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.354 1+0 records in 00:07:18.354 1+0 records out 00:07:18.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318571 s, 12.9 MB/s 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.354 05:59:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:18.612 /dev/nbd10 00:07:18.612 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:18.612 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:18.612 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:18.612 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.613 1+0 records in 00:07:18.613 1+0 records out 00:07:18.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403973 s, 10.1 MB/s 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.613 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:18.871 /dev/nbd11 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.871 1+0 records in 00:07:18.871 1+0 records out 00:07:18.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489203 s, 8.4 MB/s 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:18.871 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:19.129 /dev/nbd12 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.129 1+0 records in 00:07:19.129 1+0 records out 00:07:19.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315259 s, 13.0 MB/s 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.129 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:19.388 /dev/nbd13 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.388 1+0 records in 00:07:19.388 1+0 records out 00:07:19.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042468 s, 9.6 MB/s 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.388 05:59:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:19.388 /dev/nbd14 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.646 1+0 records in 00:07:19.646 1+0 records out 00:07:19.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382253 s, 10.7 MB/s 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.646 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.646 { 00:07:19.646 "nbd_device": "/dev/nbd0", 00:07:19.646 "bdev_name": "Nvme0n1" 00:07:19.646 }, 00:07:19.646 { 00:07:19.646 "nbd_device": "/dev/nbd1", 00:07:19.647 "bdev_name": "Nvme1n1p1" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd10", 00:07:19.647 "bdev_name": "Nvme1n1p2" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd11", 00:07:19.647 "bdev_name": "Nvme2n1" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd12", 00:07:19.647 "bdev_name": "Nvme2n2" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd13", 00:07:19.647 "bdev_name": 
"Nvme2n3" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd14", 00:07:19.647 "bdev_name": "Nvme3n1" 00:07:19.647 } 00:07:19.647 ]' 00:07:19.647 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd0", 00:07:19.647 "bdev_name": "Nvme0n1" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd1", 00:07:19.647 "bdev_name": "Nvme1n1p1" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd10", 00:07:19.647 "bdev_name": "Nvme1n1p2" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd11", 00:07:19.647 "bdev_name": "Nvme2n1" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd12", 00:07:19.647 "bdev_name": "Nvme2n2" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd13", 00:07:19.647 "bdev_name": "Nvme2n3" 00:07:19.647 }, 00:07:19.647 { 00:07:19.647 "nbd_device": "/dev/nbd14", 00:07:19.647 "bdev_name": "Nvme3n1" 00:07:19.647 } 00:07:19.647 ]' 00:07:19.647 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:19.905 /dev/nbd1 00:07:19.905 /dev/nbd10 00:07:19.905 /dev/nbd11 00:07:19.905 /dev/nbd12 00:07:19.905 /dev/nbd13 00:07:19.905 /dev/nbd14' 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:19.905 /dev/nbd1 00:07:19.905 /dev/nbd10 00:07:19.905 /dev/nbd11 00:07:19.905 /dev/nbd12 00:07:19.905 /dev/nbd13 00:07:19.905 /dev/nbd14' 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:19.905 256+0 records in 00:07:19.905 256+0 records out 00:07:19.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122546 s, 85.6 MB/s 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.905 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:19.905 256+0 records in 00:07:19.906 256+0 records out 00:07:19.906 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0722328 s, 14.5 MB/s 00:07:19.906 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.906 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:19.906 256+0 records in 00:07:19.906 256+0 records out 00:07:19.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0744192 s, 14.1 MB/s 00:07:19.906 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.906 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:19.906 256+0 records in 00:07:19.906 256+0 records out 00:07:19.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0732913 s, 14.3 MB/s 00:07:19.906 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.906 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:20.165 256+0 records in 00:07:20.165 256+0 records out 00:07:20.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106806 s, 9.8 MB/s 00:07:20.165 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.165 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:20.165 256+0 records in 00:07:20.165 256+0 records out 00:07:20.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0718027 s, 14.6 MB/s 00:07:20.165 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.165 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:20.165 256+0 records in 00:07:20.165 256+0 records out 00:07:20.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0721585 s, 14.5 MB/s 00:07:20.165 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.165 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:20.423 256+0 records in 00:07:20.423 256+0 records out 00:07:20.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0751467 s, 14.0 MB/s 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # 
for i in "${nbd_list[@]}" 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.423 05:59:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.682 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.940 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.198 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.456 05:59:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.714 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.715 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.974 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:22.277 malloc_lvol_verify 00:07:22.277 05:59:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:22.534 82c068f7-3e8a-47df-9176-5d640e6a0259 00:07:22.534 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:22.791 d5d8ec75-62c4-48ab-a3ce-e647c798ade6 00:07:22.791 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:22.791 /dev/nbd0 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:22.792 mke2fs 1.47.0 (5-Feb-2023) 00:07:22.792 Discarding device blocks: 0/4096 done 00:07:22.792 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:22.792 00:07:22.792 Allocating group tables: 0/1 done 00:07:22.792 Writing inode tables: 0/1 done 00:07:22.792 Creating journal (1024 blocks): done 00:07:22.792 Writing superblocks and filesystem accounting information: 0/1 done 00:07:22.792 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:22.792 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73382 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73382 ']' 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73382 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73382 00:07:23.050 killing process with pid 73382 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73382' 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73382 00:07:23.050 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73382 00:07:23.308 ************************************ 00:07:23.308 END TEST bdev_nbd 00:07:23.308 ************************************ 00:07:23.308 05:59:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:23.308 00:07:23.308 real 0m9.932s 00:07:23.308 user 0m14.554s 00:07:23.308 sys 0m3.341s 00:07:23.308 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.308 05:59:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:23.308 05:59:48 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:23.308 05:59:48 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:23.308 05:59:48 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:23.308 skipping fio tests on NVMe due to multi-ns failures. 00:07:23.308 05:59:48 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:23.308 05:59:48 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:23.308 05:59:48 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:23.308 05:59:48 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:23.308 05:59:48 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.308 05:59:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.308 ************************************ 00:07:23.308 START TEST bdev_verify 00:07:23.308 ************************************ 00:07:23.308 05:59:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:23.308 [2024-10-01 05:59:48.882596] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:23.308 [2024-10-01 05:59:48.882706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73789 ] 00:07:23.567 [2024-10-01 05:59:49.019256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.567 [2024-10-01 05:59:49.053681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.567 [2024-10-01 05:59:49.053809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.133 Running I/O for 5 seconds... 
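In the per-second samples and the result table that follow, the MiB/s figures are derived from IOPS and the 4 KiB I/O size set by -o 4096. A quick check of the first sample (a verification aid, not part of the test itself):

    awk 'BEGIN { iops = 17024; io = 4096; printf "%.2f MiB/s\n", iops * io / (1024 * 1024) }'
    # prints 66.50 MiB/s, matching the 17024.00 IOPS sample below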
00:07:29.312 17024.00 IOPS, 66.50 MiB/s 17376.00 IOPS, 67.88 MiB/s 20288.00 IOPS, 79.25 MiB/s 20992.00 IOPS, 82.00 MiB/s 21363.20 IOPS, 83.45 MiB/s 00:07:29.312 Latency(us) 00:07:29.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.312 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x0 length 0xbd0bd 00:07:29.312 Nvme0n1 : 5.08 1523.07 5.95 0.00 0.00 83594.75 7612.26 91952.05 00:07:29.312 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:29.312 Nvme0n1 : 5.06 1468.56 5.74 0.00 0.00 86752.57 12451.84 93968.54 00:07:29.312 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x0 length 0x4ff80 00:07:29.312 Nvme1n1p1 : 5.09 1521.63 5.94 0.00 0.00 83530.45 10737.82 89532.26 00:07:29.312 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:29.312 Nvme1n1p1 : 5.06 1468.13 5.73 0.00 0.00 86558.30 13812.97 88322.36 00:07:29.312 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x0 length 0x4ff7f 00:07:29.312 Nvme1n1p2 : 5.09 1521.07 5.94 0.00 0.00 83395.11 10788.23 85499.27 00:07:29.312 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:29.312 Nvme1n1p2 : 5.08 1472.81 5.75 0.00 0.00 86140.37 7713.08 85095.98 00:07:29.312 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x0 length 0x80000 00:07:29.312 Nvme2n1 : 5.10 1530.57 5.98 0.00 0.00 83005.70 7662.67 81062.99 00:07:29.312 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x80000 length 0x80000 00:07:29.312 Nvme2n1 : 5.09 1471.96 5.75 0.00 0.00 85984.61 9779.99 81869.59 00:07:29.312 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x0 length 0x80000 00:07:29.312 Nvme2n2 : 5.11 1529.41 5.97 0.00 0.00 82877.27 10637.00 81062.99 00:07:29.312 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x80000 length 0x80000 00:07:29.312 Nvme2n2 : 5.10 1481.24 5.79 0.00 0.00 85467.97 8922.98 84692.68 00:07:29.312 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x0 length 0x80000 00:07:29.312 Nvme2n3 : 5.11 1529.01 5.97 0.00 0.00 82731.81 10889.06 86305.87 00:07:29.312 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x80000 length 0x80000 00:07:29.312 Nvme2n3 : 5.10 1480.83 5.78 0.00 0.00 85302.47 9326.28 89128.96 00:07:29.312 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x0 length 0x20000 00:07:29.312 Nvme3n1 : 5.11 1528.59 5.97 0.00 0.00 82584.71 10838.65 91952.05 00:07:29.312 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.312 Verification LBA range: start 0x20000 length 0x20000 00:07:29.312 Nvme3n1 : 5.10 1480.39 5.78 0.00 0.00 85202.26 9427.10 92758.65 00:07:29.312 
=================================================================================================================== 00:07:29.312 Total : 21007.27 82.06 0.00 0.00 84480.41 7612.26 93968.54 00:07:29.879 00:07:29.879 real 0m6.380s 00:07:29.879 user 0m12.068s 00:07:29.879 sys 0m0.196s 00:07:29.879 05:59:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.879 05:59:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:29.879 ************************************ 00:07:29.879 END TEST bdev_verify 00:07:29.879 ************************************ 00:07:29.879 05:59:55 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:29.879 05:59:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:29.879 05:59:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.879 05:59:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.879 ************************************ 00:07:29.879 START TEST bdev_verify_big_io 00:07:29.879 ************************************ 00:07:29.879 05:59:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:29.879 [2024-10-01 05:59:55.299861] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:29.879 [2024-10-01 05:59:55.299967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73876 ] 00:07:29.879 [2024-10-01 05:59:55.436427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.879 [2024-10-01 05:59:55.470117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.879 [2024-10-01 05:59:55.470360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.445 Running I/O for 5 seconds... 
00:07:36.602 1744.00 IOPS, 109.00 MiB/s 2845.00 IOPS, 177.81 MiB/s 3373.67 IOPS, 210.85 MiB/s 00:07:36.602 Latency(us) 00:07:36.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.602 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.602 Verification LBA range: start 0x0 length 0xbd0b 00:07:36.602 Nvme0n1 : 5.89 112.89 7.06 0.00 0.00 1073517.47 9477.51 1380893.93 00:07:36.602 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.602 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:36.602 Nvme0n1 : 6.17 82.97 5.19 0.00 0.00 1447176.52 12703.90 1884210.41 00:07:36.602 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.602 Verification LBA range: start 0x0 length 0x4ff8 00:07:36.602 Nvme1n1p1 : 5.74 108.11 6.76 0.00 0.00 1093392.14 106470.79 1755154.90 00:07:36.602 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.602 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:36.603 Nvme1n1p1 : 5.96 117.53 7.35 0.00 0.00 1009164.23 83079.48 1084066.26 00:07:36.603 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x0 length 0x4ff7 00:07:36.603 Nvme1n1p2 : 5.98 110.90 6.93 0.00 0.00 1022334.23 120182.94 1768060.46 00:07:36.603 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:36.603 Nvme1n1p2 : 6.09 122.27 7.64 0.00 0.00 946005.11 60494.77 896935.78 00:07:36.603 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x0 length 0x8000 00:07:36.603 Nvme2n1 : 6.09 118.59 7.41 0.00 0.00 932542.79 44362.83 1793871.56 00:07:36.603 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x8000 length 0x8000 00:07:36.603 Nvme2n1 : 6.02 121.25 7.58 0.00 0.00 927993.63 60091.47 1025991.29 00:07:36.603 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x0 length 0x8000 00:07:36.603 Nvme2n2 : 6.09 123.32 7.71 0.00 0.00 871341.90 63317.86 1819682.66 00:07:36.603 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x8000 length 0x8000 00:07:36.603 Nvme2n2 : 6.09 126.17 7.89 0.00 0.00 862814.52 63317.86 974369.08 00:07:36.603 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x0 length 0x8000 00:07:36.603 Nvme2n3 : 6.20 137.00 8.56 0.00 0.00 759776.15 8872.57 1858399.31 00:07:36.603 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x8000 length 0x8000 00:07:36.603 Nvme2n3 : 6.15 128.61 8.04 0.00 0.00 814508.41 60494.77 993727.41 00:07:36.603 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x0 length 0x2000 00:07:36.603 Nvme3n1 : 6.26 188.09 11.76 0.00 0.00 537872.42 409.60 1458327.24 00:07:36.603 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.603 Verification LBA range: start 0x2000 length 0x2000 00:07:36.603 Nvme3n1 : 6.18 144.96 9.06 0.00 0.00 709411.67 2281.16 1013085.74 00:07:36.603 
=================================================================================================================== 00:07:36.603 Total : 1742.66 108.92 0.00 0.00 892216.05 409.60 1884210.41 00:07:37.537 00:07:37.537 real 0m7.787s 00:07:37.537 user 0m14.234s 00:07:37.537 sys 0m0.198s 00:07:37.537 06:00:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.537 06:00:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:37.537 ************************************ 00:07:37.537 END TEST bdev_verify_big_io 00:07:37.537 ************************************ 00:07:37.538 06:00:03 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.538 06:00:03 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:37.538 06:00:03 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.538 06:00:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.538 ************************************ 00:07:37.538 START TEST bdev_write_zeroes 00:07:37.538 ************************************ 00:07:37.538 06:00:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.538 [2024-10-01 06:00:03.121381] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:37.538 [2024-10-01 06:00:03.121484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73981 ] 00:07:37.796 [2024-10-01 06:00:03.253071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.796 [2024-10-01 06:00:03.286357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.362 Running I/O for 1 seconds... 
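The bdev_write_zeroes pass reporting below reuses the bdevperf harness with a different workload. Restating the invocation already shown above with the flag meanings spelled out (an annotated equivalent, not a new command):

    # Flag-by-flag restatement of the bdevperf invocation above:
    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # same bdev config under test
        -q 128            # queue depth: 128 outstanding I/Os
        -o 4096           # I/O size: 4 KiB
        -w write_zeroes   # workload: write-zeroes commands instead of verify reads
        -t 1              # run time: 1 second (single core, per the log above)
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"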
00:07:39.294 64512.00 IOPS, 252.00 MiB/s 00:07:39.294 Latency(us) 00:07:39.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.294 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:39.294 Nvme0n1 : 1.02 9188.48 35.89 0.00 0.00 13899.01 10989.88 25811.10 00:07:39.294 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:39.294 Nvme1n1p1 : 1.03 9177.25 35.85 0.00 0.00 13896.93 10737.82 26416.05 00:07:39.294 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:39.294 Nvme1n1p2 : 1.03 9166.03 35.80 0.00 0.00 13846.87 10687.41 26214.40 00:07:39.295 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:39.295 Nvme2n1 : 1.03 9155.72 35.76 0.00 0.00 13842.70 10989.88 26012.75 00:07:39.295 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:39.295 Nvme2n2 : 1.03 9145.35 35.72 0.00 0.00 13798.41 9023.80 26214.40 00:07:39.295 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:39.295 Nvme2n3 : 1.03 9135.10 35.68 0.00 0.00 13769.80 7158.55 25811.10 00:07:39.295 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:39.295 Nvme3n1 : 1.03 9124.87 35.64 0.00 0.00 13760.73 6452.78 25609.45 00:07:39.295 =================================================================================================================== 00:07:39.295 Total : 64092.79 250.36 0.00 0.00 13830.64 6452.78 26416.05 00:07:39.295 00:07:39.295 real 0m1.825s 00:07:39.295 user 0m1.567s 00:07:39.295 sys 0m0.148s 00:07:39.295 06:00:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.295 ************************************ 00:07:39.295 END TEST bdev_write_zeroes 00:07:39.295 ************************************ 00:07:39.295 06:00:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:39.552 06:00:04 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.552 06:00:04 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:39.552 06:00:04 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.552 06:00:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.552 ************************************ 00:07:39.552 START TEST bdev_json_nonenclosed 00:07:39.552 ************************************ 00:07:39.552 06:00:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.552 [2024-10-01 06:00:05.008214] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:39.552 [2024-10-01 06:00:05.008345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74019 ] 00:07:39.552 [2024-10-01 06:00:05.144701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.810 [2024-10-01 06:00:05.178323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.811 [2024-10-01 06:00:05.178421] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:39.811 [2024-10-01 06:00:05.178436] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:39.811 [2024-10-01 06:00:05.178447] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.811 00:07:39.811 real 0m0.308s 00:07:39.811 user 0m0.121s 00:07:39.811 sys 0m0.085s 00:07:39.811 06:00:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.811 06:00:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:39.811 ************************************ 00:07:39.811 END TEST bdev_json_nonenclosed 00:07:39.811 ************************************ 00:07:39.811 06:00:05 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.811 06:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:39.811 06:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.811 06:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.811 ************************************ 00:07:39.811 START TEST bdev_json_nonarray 00:07:39.811 ************************************ 00:07:39.811 06:00:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:39.811 [2024-10-01 06:00:05.373610] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:39.811 [2024-10-01 06:00:05.373720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74043 ] 00:07:40.069 [2024-10-01 06:00:05.510365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.069 [2024-10-01 06:00:05.551963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.069 [2024-10-01 06:00:05.552055] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
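The two JSON negative tests here each hand bdevperf a deliberately malformed --json config and expect json_config_prepare_ctx to reject it with the errors shown. The file contents are not printed in this log, but the shapes being exercised are roughly (illustrative, inferred from the error strings):

    # nonenclosed.json: valid-looking content, but not enclosed in a top-level {}
    "subsystems": []

    # nonarray.json: enclosed, but "subsystems" is not an array
    { "subsystems": "not-an-array" }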
00:07:40.069 [2024-10-01 06:00:05.552072] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.069 [2024-10-01 06:00:05.552083] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.069 00:07:40.069 real 0m0.314s 00:07:40.069 user 0m0.127s 00:07:40.069 sys 0m0.084s 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.069 ************************************ 00:07:40.069 END TEST bdev_json_nonarray 00:07:40.069 ************************************ 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:40.069 06:00:05 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:40.069 06:00:05 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:40.069 06:00:05 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:40.069 06:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.069 06:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.069 06:00:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.069 ************************************ 00:07:40.069 START TEST bdev_gpt_uuid 00:07:40.069 ************************************ 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74063 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74063 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74063 ']' 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:40.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.069 06:00:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:40.327 [2024-10-01 06:00:05.744004] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
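bdev_gpt_uuid, starting here, loads the bdev config into spdk_tgt and then looks up each GPT partition bdev by its unique partition GUID; the rpc output and jq assertions appear further below. The shape of the check, using the first-partition GUID from this run (a sketch of the comparisons done by blockdev.sh@620-623):

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$uuid")
    [ "$(jq -r length <<< "$bdev")" = 1 ]                    # exactly one matching bdev
    [ "$(jq -r '.[0].aliases[0]' <<< "$bdev")" = "$uuid" ]   # alias round-trips the GUID
    [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev")" = "$uuid" ]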
00:07:40.327 [2024-10-01 06:00:05.744118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74063 ] 00:07:40.327 [2024-10-01 06:00:05.878367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.327 [2024-10-01 06:00:05.911918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.261 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.261 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:41.261 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.261 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.261 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:41.519 Some configs were skipped because the RPC state that can call them passed over. 00:07:41.519 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.519 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:41.519 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.519 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:41.519 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.519 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:41.519 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.519 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:41.520 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.520 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:41.520 { 00:07:41.520 "name": "Nvme1n1p1", 00:07:41.520 "aliases": [ 00:07:41.520 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:41.520 ], 00:07:41.520 "product_name": "GPT Disk", 00:07:41.520 "block_size": 4096, 00:07:41.520 "num_blocks": 655104, 00:07:41.520 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:41.520 "assigned_rate_limits": { 00:07:41.520 "rw_ios_per_sec": 0, 00:07:41.520 "rw_mbytes_per_sec": 0, 00:07:41.520 "r_mbytes_per_sec": 0, 00:07:41.520 "w_mbytes_per_sec": 0 00:07:41.520 }, 00:07:41.520 "claimed": false, 00:07:41.520 "zoned": false, 00:07:41.520 "supported_io_types": { 00:07:41.520 "read": true, 00:07:41.520 "write": true, 00:07:41.520 "unmap": true, 00:07:41.520 "flush": true, 00:07:41.520 "reset": true, 00:07:41.520 "nvme_admin": false, 00:07:41.520 "nvme_io": false, 00:07:41.520 "nvme_io_md": false, 00:07:41.520 "write_zeroes": true, 00:07:41.520 "zcopy": false, 00:07:41.520 "get_zone_info": false, 00:07:41.520 "zone_management": false, 00:07:41.520 "zone_append": false, 00:07:41.520 "compare": true, 00:07:41.520 "compare_and_write": false, 00:07:41.520 "abort": true, 00:07:41.520 "seek_hole": false, 00:07:41.520 "seek_data": false, 00:07:41.520 "copy": true, 00:07:41.520 "nvme_iov_md": false 00:07:41.520 }, 00:07:41.520 "driver_specific": { 
00:07:41.520 "gpt": { 00:07:41.520 "base_bdev": "Nvme1n1", 00:07:41.520 "offset_blocks": 256, 00:07:41.520 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:41.520 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:41.520 "partition_name": "SPDK_TEST_first" 00:07:41.520 } 00:07:41.520 } 00:07:41.520 } 00:07:41.520 ]' 00:07:41.520 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:41.520 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:41.520 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:41.520 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:41.520 06:00:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:41.520 { 00:07:41.520 "name": "Nvme1n1p2", 00:07:41.520 "aliases": [ 00:07:41.520 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:41.520 ], 00:07:41.520 "product_name": "GPT Disk", 00:07:41.520 "block_size": 4096, 00:07:41.520 "num_blocks": 655103, 00:07:41.520 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:41.520 "assigned_rate_limits": { 00:07:41.520 "rw_ios_per_sec": 0, 00:07:41.520 "rw_mbytes_per_sec": 0, 00:07:41.520 "r_mbytes_per_sec": 0, 00:07:41.520 "w_mbytes_per_sec": 0 00:07:41.520 }, 00:07:41.520 "claimed": false, 00:07:41.520 "zoned": false, 00:07:41.520 "supported_io_types": { 00:07:41.520 "read": true, 00:07:41.520 "write": true, 00:07:41.520 "unmap": true, 00:07:41.520 "flush": true, 00:07:41.520 "reset": true, 00:07:41.520 "nvme_admin": false, 00:07:41.520 "nvme_io": false, 00:07:41.520 "nvme_io_md": false, 00:07:41.520 "write_zeroes": true, 00:07:41.520 "zcopy": false, 00:07:41.520 "get_zone_info": false, 00:07:41.520 "zone_management": false, 00:07:41.520 "zone_append": false, 00:07:41.520 "compare": true, 00:07:41.520 "compare_and_write": false, 00:07:41.520 "abort": true, 00:07:41.520 "seek_hole": false, 00:07:41.520 "seek_data": false, 00:07:41.520 "copy": true, 00:07:41.520 "nvme_iov_md": false 00:07:41.520 }, 00:07:41.520 "driver_specific": { 00:07:41.520 "gpt": { 00:07:41.520 "base_bdev": "Nvme1n1", 00:07:41.520 "offset_blocks": 655360, 00:07:41.520 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:41.520 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:41.520 "partition_name": "SPDK_TEST_second" 00:07:41.520 } 00:07:41.520 } 00:07:41.520 } 00:07:41.520 ]' 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:41.520 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:41.778 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74063 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74063 ']' 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74063 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74063 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.779 killing process with pid 74063 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74063' 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74063 00:07:41.779 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74063 00:07:42.037 00:07:42.037 real 0m1.759s 00:07:42.037 user 0m1.909s 00:07:42.037 sys 0m0.358s 00:07:42.037 ************************************ 00:07:42.037 END TEST bdev_gpt_uuid 00:07:42.037 ************************************ 00:07:42.037 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.037 06:00:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:42.037 06:00:07 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:42.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.555 Waiting for block devices as requested 00:07:42.555 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.555 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:42.555 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.813 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.075 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:48.075 06:00:13 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:48.075 06:00:13 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:48.075 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:48.075 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:48.075 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:48.075 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:48.075 06:00:13 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:48.075 00:07:48.075 real 0m47.713s 00:07:48.075 user 1m0.370s 00:07:48.075 sys 0m7.306s 00:07:48.075 06:00:13 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.075 06:00:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.075 ************************************ 00:07:48.075 END TEST blockdev_nvme_gpt 00:07:48.075 ************************************ 00:07:48.075 06:00:13 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:48.075 06:00:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.075 06:00:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.075 06:00:13 -- common/autotest_common.sh@10 -- # set +x 00:07:48.075 ************************************ 00:07:48.075 START TEST nvme 00:07:48.075 ************************************ 00:07:48.075 06:00:13 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:48.075 * Looking for test storage... 00:07:48.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:48.075 06:00:13 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:48.075 06:00:13 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:48.075 06:00:13 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:48.333 06:00:13 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:48.333 06:00:13 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.333 06:00:13 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.333 06:00:13 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.333 06:00:13 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.333 06:00:13 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.333 06:00:13 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.333 06:00:13 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.333 06:00:13 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.333 06:00:13 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.333 06:00:13 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.333 06:00:13 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.333 06:00:13 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:48.333 06:00:13 nvme -- scripts/common.sh@345 -- # : 1 00:07:48.333 06:00:13 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.333 06:00:13 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.333 06:00:13 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:48.333 06:00:13 nvme -- scripts/common.sh@353 -- # local d=1 00:07:48.333 06:00:13 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.333 06:00:13 nvme -- scripts/common.sh@355 -- # echo 1 00:07:48.333 06:00:13 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.333 06:00:13 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:48.333 06:00:13 nvme -- scripts/common.sh@353 -- # local d=2 00:07:48.333 06:00:13 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.333 06:00:13 nvme -- scripts/common.sh@355 -- # echo 2 00:07:48.333 06:00:13 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.333 06:00:13 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.333 06:00:13 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.333 06:00:13 nvme -- scripts/common.sh@368 -- # return 0 00:07:48.333 06:00:13 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.333 06:00:13 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:48.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.333 --rc genhtml_branch_coverage=1 00:07:48.333 --rc genhtml_function_coverage=1 00:07:48.333 --rc genhtml_legend=1 00:07:48.333 --rc geninfo_all_blocks=1 00:07:48.333 --rc geninfo_unexecuted_blocks=1 00:07:48.333 00:07:48.333 ' 00:07:48.333 06:00:13 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:48.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.333 --rc genhtml_branch_coverage=1 00:07:48.333 --rc genhtml_function_coverage=1 00:07:48.333 --rc genhtml_legend=1 00:07:48.333 --rc geninfo_all_blocks=1 00:07:48.333 --rc geninfo_unexecuted_blocks=1 00:07:48.333 00:07:48.333 ' 00:07:48.333 06:00:13 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:48.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.333 --rc genhtml_branch_coverage=1 00:07:48.333 --rc genhtml_function_coverage=1 00:07:48.333 --rc genhtml_legend=1 00:07:48.333 --rc geninfo_all_blocks=1 00:07:48.333 --rc geninfo_unexecuted_blocks=1 00:07:48.333 00:07:48.333 ' 00:07:48.333 06:00:13 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:48.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.333 --rc genhtml_branch_coverage=1 00:07:48.333 --rc genhtml_function_coverage=1 00:07:48.333 --rc genhtml_legend=1 00:07:48.333 --rc geninfo_all_blocks=1 00:07:48.333 --rc geninfo_unexecuted_blocks=1 00:07:48.333 00:07:48.333 ' 00:07:48.333 06:00:13 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:48.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.158 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.158 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.158 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.158 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.415 06:00:14 nvme -- nvme/nvme.sh@79 -- # uname 00:07:49.415 06:00:14 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:49.415 06:00:14 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:49.415 06:00:14 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:49.415 06:00:14 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:49.415 06:00:14 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:49.415 06:00:14 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:49.415 06:00:14 nvme -- common/autotest_common.sh@1071 -- # stubpid=74690 00:07:49.415 Waiting for stub to ready for secondary processes... 00:07:49.415 06:00:14 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:49.415 06:00:14 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:49.415 06:00:14 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/74690 ]] 00:07:49.415 06:00:14 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:49.415 06:00:14 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:49.415 [2024-10-01 06:00:14.841037] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:49.416 [2024-10-01 06:00:14.841160] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:49.980 [2024-10-01 06:00:15.570421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.980 [2024-10-01 06:00:15.590523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.980 [2024-10-01 06:00:15.590818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.980 [2024-10-01 06:00:15.590818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.238 [2024-10-01 06:00:15.601509] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:50.238 [2024-10-01 06:00:15.601546] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:50.238 [2024-10-01 06:00:15.614679] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:50.238 [2024-10-01 06:00:15.614860] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:50.238 [2024-10-01 06:00:15.615430] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:50.238 [2024-10-01 06:00:15.615591] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:50.238 [2024-10-01 06:00:15.615646] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:50.238 [2024-10-01 06:00:15.616565] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:50.238 [2024-10-01 06:00:15.616731] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:50.238 [2024-10-01 06:00:15.616772] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:50.238 [2024-10-01 06:00:15.618210] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:50.238 [2024-10-01 06:00:15.618424] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:50.238 [2024-10-01 06:00:15.618500] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:50.238 [2024-10-01 06:00:15.618558] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:50.238 [2024-10-01 06:00:15.618618] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:50.238 done. 00:07:50.238 06:00:15 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:50.238 06:00:15 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:50.238 06:00:15 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:50.238 06:00:15 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:50.238 06:00:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.238 06:00:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.238 ************************************ 00:07:50.238 START TEST nvme_reset 00:07:50.238 ************************************ 00:07:50.238 06:00:15 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:50.496 Initializing NVMe Controllers 00:07:50.496 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:50.496 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:50.496 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:50.496 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:50.496 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:50.496 00:07:50.496 real 0m0.180s 00:07:50.496 user 0m0.054s 00:07:50.496 sys 0m0.081s 00:07:50.496 06:00:16 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.496 ************************************ 00:07:50.496 06:00:16 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:50.496 END TEST nvme_reset 00:07:50.496 ************************************ 00:07:50.496 06:00:16 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:50.496 06:00:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.496 06:00:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.496 06:00:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.496 ************************************ 00:07:50.496 START TEST nvme_identify 00:07:50.496 ************************************ 00:07:50.496 06:00:16 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:50.496 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:50.496 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:50.496 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:50.496 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:50.496 06:00:16 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:50.496 06:00:16 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:50.496 06:00:16 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:50.496 06:00:16 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:50.497 06:00:16 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:50.758 06:00:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:50.758 06:00:16 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:50.758 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:50.758 [2024-10-01 
06:00:16.286416] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 74711 terminated unexpected 00:07:50.758 ===================================================== 00:07:50.758 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:50.758 ===================================================== 00:07:50.758 Controller Capabilities/Features 00:07:50.758 ================================ 00:07:50.758 Vendor ID: 1b36 00:07:50.758 Subsystem Vendor ID: 1af4 00:07:50.758 Serial Number: 12343 00:07:50.758 Model Number: QEMU NVMe Ctrl 00:07:50.758 Firmware Version: 8.0.0 00:07:50.758 Recommended Arb Burst: 6 00:07:50.758 IEEE OUI Identifier: 00 54 52 00:07:50.758 Multi-path I/O 00:07:50.758 May have multiple subsystem ports: No 00:07:50.758 May have multiple controllers: Yes 00:07:50.758 Associated with SR-IOV VF: No 00:07:50.758 Max Data Transfer Size: 524288 00:07:50.758 Max Number of Namespaces: 256 00:07:50.758 Max Number of I/O Queues: 64 00:07:50.758 NVMe Specification Version (VS): 1.4 00:07:50.758 NVMe Specification Version (Identify): 1.4 00:07:50.758 Maximum Queue Entries: 2048 00:07:50.758 Contiguous Queues Required: Yes 00:07:50.758 Arbitration Mechanisms Supported 00:07:50.758 Weighted Round Robin: Not Supported 00:07:50.758 Vendor Specific: Not Supported 00:07:50.758 Reset Timeout: 7500 ms 00:07:50.758 Doorbell Stride: 4 bytes 00:07:50.758 NVM Subsystem Reset: Not Supported 00:07:50.758 Command Sets Supported 00:07:50.758 NVM Command Set: Supported 00:07:50.758 Boot Partition: Not Supported 00:07:50.758 Memory Page Size Minimum: 4096 bytes 00:07:50.758 Memory Page Size Maximum: 65536 bytes 00:07:50.758 Persistent Memory Region: Not Supported 00:07:50.758 Optional Asynchronous Events Supported 00:07:50.758 Namespace Attribute Notices: Supported 00:07:50.758 Firmware Activation Notices: Not Supported 00:07:50.758 ANA Change Notices: Not Supported 00:07:50.758 PLE Aggregate Log Change Notices: Not Supported 00:07:50.758 LBA Status Info Alert Notices: Not Supported 00:07:50.758 EGE Aggregate Log Change Notices: Not Supported 00:07:50.758 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.758 Zone Descriptor Change Notices: Not Supported 00:07:50.758 Discovery Log Change Notices: Not Supported 00:07:50.758 Controller Attributes 00:07:50.758 128-bit Host Identifier: Not Supported 00:07:50.758 Non-Operational Permissive Mode: Not Supported 00:07:50.758 NVM Sets: Not Supported 00:07:50.758 Read Recovery Levels: Not Supported 00:07:50.758 Endurance Groups: Supported 00:07:50.758 Predictable Latency Mode: Not Supported 00:07:50.758 Traffic Based Keep Alive: Not Supported 00:07:50.758 Namespace Granularity: Not Supported 00:07:50.758 SQ Associations: Not Supported 00:07:50.758 UUID List: Not Supported 00:07:50.758 Multi-Domain Subsystem: Not Supported 00:07:50.758 Fixed Capacity Management: Not Supported 00:07:50.758 Variable Capacity Management: Not Supported 00:07:50.758 Delete Endurance Group: Not Supported 00:07:50.758 Delete NVM Set: Not Supported 00:07:50.758 Extended LBA Formats Supported: Supported 00:07:50.758 Flexible Data Placement Supported: Supported 00:07:50.758 00:07:50.758 Controller Memory Buffer Support 00:07:50.758 ================================ 00:07:50.758 Supported: No 00:07:50.758 00:07:50.758 Persistent Memory Region Support 00:07:50.758 ================================ 00:07:50.758 Supported: No 00:07:50.758 00:07:50.758 Admin Command Set Attributes 00:07:50.758 ============================ 00:07:50.758 Security Send/Receive: Not
Supported 00:07:50.758 Format NVM: Supported 00:07:50.758 Firmware Activate/Download: Not Supported 00:07:50.758 Namespace Management: Supported 00:07:50.758 Device Self-Test: Not Supported 00:07:50.758 Directives: Supported 00:07:50.758 NVMe-MI: Not Supported 00:07:50.758 Virtualization Management: Not Supported 00:07:50.758 Doorbell Buffer Config: Supported 00:07:50.758 Get LBA Status Capability: Not Supported 00:07:50.758 Command & Feature Lockdown Capability: Not Supported 00:07:50.758 Abort Command Limit: 4 00:07:50.758 Async Event Request Limit: 4 00:07:50.758 Number of Firmware Slots: N/A 00:07:50.758 Firmware Slot 1 Read-Only: N/A 00:07:50.758 Firmware Activation Without Reset: N/A 00:07:50.758 Multiple Update Detection Support: N/A 00:07:50.758 Firmware Update Granularity: No Information Provided 00:07:50.758 Per-Namespace SMART Log: Yes 00:07:50.758 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.758 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:50.758 Command Effects Log Page: Supported 00:07:50.758 Get Log Page Extended Data: Supported 00:07:50.758 Telemetry Log Pages: Not Supported 00:07:50.758 Persistent Event Log Pages: Not Supported 00:07:50.758 Supported Log Pages Log Page: May Support 00:07:50.758 Commands Supported & Effects Log Page: Not Supported 00:07:50.758 Feature Identifiers & Effects Log Page:May Support 00:07:50.758 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.758 Data Area 4 for Telemetry Log: Not Supported 00:07:50.758 Error Log Page Entries Supported: 1 00:07:50.758 Keep Alive: Not Supported 00:07:50.758 00:07:50.758 NVM Command Set Attributes 00:07:50.758 ========================== 00:07:50.758 Submission Queue Entry Size 00:07:50.758 Max: 64 00:07:50.758 Min: 64 00:07:50.758 Completion Queue Entry Size 00:07:50.758 Max: 16 00:07:50.758 Min: 16 00:07:50.758 Number of Namespaces: 256 00:07:50.758 Compare Command: Supported 00:07:50.758 Write Uncorrectable Command: Not Supported 00:07:50.758 Dataset Management Command: Supported 00:07:50.758 Write Zeroes Command: Supported 00:07:50.758 Set Features Save Field: Supported 00:07:50.758 Reservations: Not Supported 00:07:50.758 Timestamp: Supported 00:07:50.758 Copy: Supported 00:07:50.758 Volatile Write Cache: Present 00:07:50.758 Atomic Write Unit (Normal): 1 00:07:50.758 Atomic Write Unit (PFail): 1 00:07:50.758 Atomic Compare & Write Unit: 1 00:07:50.758 Fused Compare & Write: Not Supported 00:07:50.758 Scatter-Gather List 00:07:50.758 SGL Command Set: Supported 00:07:50.758 SGL Keyed: Not Supported 00:07:50.758 SGL Bit Bucket Descriptor: Not Supported 00:07:50.758 SGL Metadata Pointer: Not Supported 00:07:50.758 Oversized SGL: Not Supported 00:07:50.758 SGL Metadata Address: Not Supported 00:07:50.758 SGL Offset: Not Supported 00:07:50.758 Transport SGL Data Block: Not Supported 00:07:50.758 Replay Protected Memory Block: Not Supported 00:07:50.758 00:07:50.758 Firmware Slot Information 00:07:50.758 ========================= 00:07:50.758 Active slot: 1 00:07:50.758 Slot 1 Firmware Revision: 1.0 00:07:50.758 00:07:50.758 00:07:50.758 Commands Supported and Effects 00:07:50.758 ============================== 00:07:50.758 Admin Commands 00:07:50.758 -------------- 00:07:50.758 Delete I/O Submission Queue (00h): Supported 00:07:50.758 Create I/O Submission Queue (01h): Supported 00:07:50.758 Get Log Page (02h): Supported 00:07:50.758 Delete I/O Completion Queue (04h): Supported 00:07:50.758 Create I/O Completion Queue (05h): Supported 00:07:50.758 Identify (06h): Supported 
00:07:50.758 Abort (08h): Supported 00:07:50.758 Set Features (09h): Supported 00:07:50.758 Get Features (0Ah): Supported 00:07:50.758 Asynchronous Event Request (0Ch): Supported 00:07:50.758 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.758 Directive Send (19h): Supported 00:07:50.758 Directive Receive (1Ah): Supported 00:07:50.758 Virtualization Management (1Ch): Supported 00:07:50.758 Doorbell Buffer Config (7Ch): Supported 00:07:50.759 Format NVM (80h): Supported LBA-Change 00:07:50.759 I/O Commands 00:07:50.759 ------------ 00:07:50.759 Flush (00h): Supported LBA-Change 00:07:50.759 Write (01h): Supported LBA-Change 00:07:50.759 Read (02h): Supported 00:07:50.759 Compare (05h): Supported 00:07:50.759 Write Zeroes (08h): Supported LBA-Change 00:07:50.759 Dataset Management (09h): Supported LBA-Change 00:07:50.759 Unknown (0Ch): Supported 00:07:50.759 Unknown (12h): Supported 00:07:50.759 Copy (19h): Supported LBA-Change 00:07:50.759 Unknown (1Dh): Supported LBA-Change 00:07:50.759 00:07:50.759 Error Log 00:07:50.759 ========= 00:07:50.759 00:07:50.759 Arbitration 00:07:50.759 =========== 00:07:50.759 Arbitration Burst: no limit 00:07:50.759 00:07:50.759 Power Management 00:07:50.759 ================ 00:07:50.759 Number of Power States: 1 00:07:50.759 Current Power State: Power State #0 00:07:50.759 Power State #0: 00:07:50.759 Max Power: 25.00 W 00:07:50.759 Non-Operational State: Operational 00:07:50.759 Entry Latency: 16 microseconds 00:07:50.759 Exit Latency: 4 microseconds 00:07:50.759 Relative Read Throughput: 0 00:07:50.759 Relative Read Latency: 0 00:07:50.759 Relative Write Throughput: 0 00:07:50.759 Relative Write Latency: 0 00:07:50.759 Idle Power: Not Reported 00:07:50.759 Active Power: Not Reported 00:07:50.759 Non-Operational Permissive Mode: Not Supported 00:07:50.759 00:07:50.759 Health Information 00:07:50.759 ================== 00:07:50.759 Critical Warnings: 00:07:50.759 Available Spare Space: OK 00:07:50.759 Temperature: OK 00:07:50.759 Device Reliability: OK 00:07:50.759 Read Only: No 00:07:50.759 Volatile Memory Backup: OK 00:07:50.759 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.759 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.759 Available Spare: 0% 00:07:50.759 Available Spare Threshold: 0% 00:07:50.759 Life Percentage Used: 0% 00:07:50.759 Data Units Read: 863 00:07:50.759 Data Units Written: 792 00:07:50.759 Host Read Commands: 39335 00:07:50.759 Host Write Commands: 38758 00:07:50.759 Controller Busy Time: 0 minutes 00:07:50.759 Power Cycles: 0 00:07:50.759 Power On Hours: 0 hours 00:07:50.759 Unsafe Shutdowns: 0 00:07:50.759 Unrecoverable Media Errors: 0 00:07:50.759 Lifetime Error Log Entries: 0 00:07:50.759 Warning Temperature Time: 0 minutes 00:07:50.759 Critical Temperature Time: 0 minutes 00:07:50.759 00:07:50.759 Number of Queues 00:07:50.759 ================ 00:07:50.759 Number of I/O Submission Queues: 64 00:07:50.759 Number of I/O Completion Queues: 64 00:07:50.759 00:07:50.759 ZNS Specific Controller Data 00:07:50.759 ============================ 00:07:50.759 Zone Append Size Limit: 0 00:07:50.759 00:07:50.759 00:07:50.759 Active Namespaces 00:07:50.759 ================= 00:07:50.759 Namespace ID:1 00:07:50.759 Error Recovery Timeout: Unlimited 00:07:50.759 Command Set Identifier: NVM (00h) 00:07:50.759 Deallocate: Supported 00:07:50.759 Deallocated/Unwritten Error: Supported 00:07:50.759 Deallocated Read Value: All 0x00 00:07:50.759 Deallocate in Write Zeroes: Not Supported 00:07:50.759 Deallocated Guard 
Field: 0xFFFF 00:07:50.759 Flush: Supported 00:07:50.759 Reservation: Not Supported 00:07:50.759 Namespace Sharing Capabilities: Multiple Controllers 00:07:50.759 Size (in LBAs): 262144 (1GiB) 00:07:50.759 Capacity (in LBAs): 262144 (1GiB) 00:07:50.759 Utilization (in LBAs): 262144 (1GiB) 00:07:50.759 Thin Provisioning: Not Supported 00:07:50.759 Per-NS Atomic Units: No 00:07:50.759 Maximum Single Source Range Length: 128 00:07:50.759 Maximum Copy Length: 128 00:07:50.759 Maximum Source Range Count: 128 00:07:50.759 NGUID/EUI64 Never Reused: No 00:07:50.759 Namespace Write Protected: No 00:07:50.759 Endurance group ID: 1 00:07:50.759 Number of LBA Formats: 8 00:07:50.759 Current LBA Format: LBA Format #04 00:07:50.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.759 00:07:50.759 Get Feature FDP: 00:07:50.759 ================ 00:07:50.759 Enabled: Yes 00:07:50.759 FDP configuration index: 0 00:07:50.759 00:07:50.759 FDP configurations log page 00:07:50.759 =========================== 00:07:50.759 Number of FDP configurations: 1 00:07:50.759 Version: 0 00:07:50.759 Size: 112 00:07:50.759 FDP Configuration Descriptor: 0 00:07:50.759 Descriptor Size: 96 00:07:50.759 Reclaim Group Identifier format: 2 00:07:50.759 FDP Volatile Write Cache: Not Present 00:07:50.759 FDP Configuration: Valid 00:07:50.759 Vendor Specific Size: 0 00:07:50.759 Number of Reclaim Groups: 2 00:07:50.759 Number of Reclaim Unit Handles: 8 00:07:50.759 Max Placement Identifiers: 128 00:07:50.759 Number of Namespaces Supported: 256 00:07:50.759 Reclaim unit Nominal Size: 6000000 bytes 00:07:50.759 Estimated Reclaim Unit Time Limit: Not Reported 00:07:50.759 RUH Desc #000: RUH Type: Initially Isolated 00:07:50.759 RUH Desc #001: RUH Type: Initially Isolated 00:07:50.759 RUH Desc #002: RUH Type: Initially Isolated 00:07:50.759 RUH Desc #003: RUH Type: Initially Isolated 00:07:50.759 RUH Desc #004: RUH Type: Initially Isolated 00:07:50.759 RUH Desc #005: RUH Type: Initially Isolated 00:07:50.759 RUH Desc #006: RUH Type: Initially Isolated 00:07:50.759 RUH Desc #007: RUH Type: Initially Isolated 00:07:50.759 00:07:50.759 FDP reclaim unit handle usage log page 00:07:50.759 ====================================== 00:07:50.759 Number of Reclaim Unit Handles: 8 00:07:50.759 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:50.759 RUH Usage Desc #001: RUH Attributes: Unused 00:07:50.759 RUH Usage Desc #002: RUH Attributes: Unused 00:07:50.759 RUH Usage Desc #003: RUH Attributes: Unused 00:07:50.759 RUH Usage Desc #004: RUH Attributes: Unused 00:07:50.759 RUH Usage Desc #005: RUH Attributes: Unused 00:07:50.759 RUH Usage Desc #006: RUH Attributes: Unused 00:07:50.759 RUH Usage Desc #007: RUH Attributes: Unused 00:07:50.759 00:07:50.759 FDP statistics log page 00:07:50.759 ======================= 00:07:50.759 Host bytes with metadata written: 498696192 00:07:50.759 Media bytes with metadata written: 498774016 00:07:50.759 Media bytes erased: 0 00:07:50.759 00:07:50.759 FDP events log page 00:07:50.759 =================== 00:07:50.759 Number of FDP events: 0 00:07:50.759 00:07:50.759
NVM Specific Namespace Data 00:07:50.759 =========================== 00:07:50.759 Logical Block Storage Tag Mask: 0 00:07:50.759 Protection Information Capabilities: 00:07:50.759 16b Guard Protection Information Storage Tag Support: No 00:07:50.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.759 Storage Tag Check Read Support: No 00:07:50.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.759 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.759 ===================================================== 00:07:50.760 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.760 ===================================================== 00:07:50.760 Controller Capabilities/Features 00:07:50.760 ================================ 00:07:50.760 Vendor ID: 1b36 00:07:50.760 Subsystem Vendor ID: 1af4 00:07:50.760 Serial Number: 12340 00:07:50.760 Model Number: QEMU NVMe Ctrl 00:07:50.760 Firmware Version: 8.0.0 00:07:50.760 Recommended Arb Burst: 6 00:07:50.760 IEEE OUI Identifier: 00 54 52 00:07:50.760 Multi-path I/O 00:07:50.760 May have multiple subsystem ports: No 00:07:50.760 May have multiple controllers: No 00:07:50.760 Associated with SR-IOV VF: No 00:07:50.760 Max Data Transfer Size: 524288 00:07:50.760 Max Number of Namespaces: 256 00:07:50.760 Max Number of I/O Queues: 64 00:07:50.760 NVMe Specification Version (VS): 1.4 00:07:50.760 NVMe Specification Version (Identify): 1.4 00:07:50.760 Maximum Queue Entries: 2048 00:07:50.760 Contiguous Queues Required: Yes 00:07:50.760 Arbitration Mechanisms Supported 00:07:50.760 Weighted Round Robin: Not Supported 00:07:50.760 Vendor Specific: Not Supported 00:07:50.760 Reset Timeout: 7500 ms 00:07:50.760 Doorbell Stride: 4 bytes 00:07:50.760 NVM Subsystem Reset: Not Supported 00:07:50.760 Command Sets Supported 00:07:50.760 NVM Command Set: Supported 00:07:50.760 Boot Partition: Not Supported 00:07:50.760 Memory Page Size Minimum: 4096 bytes 00:07:50.760 Memory Page Size Maximum: 65536 bytes 00:07:50.760 Persistent Memory Region: Not Supported 00:07:50.760 Optional Asynchronous Events Supported 00:07:50.760 Namespace Attribute Notices: Supported 00:07:50.760 Firmware Activation Notices: Not Supported 00:07:50.760 ANA Change Notices: Not Supported 00:07:50.760 PLE Aggregate Log Change Notices: Not Supported 00:07:50.760 LBA Status Info Alert Notices: Not Supported 00:07:50.760 EGE Aggregate Log Change Notices: Not Supported 00:07:50.760 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.760 Zone Descriptor Change Notices: Not Supported 00:07:50.760 Discovery Log Change Notices: Not Supported 00:07:50.760 Controller Attributes 00:07:50.760 128-bit Host Identifier: Not Supported 00:07:50.760 Non-Operational Permissive Mode: Not Supported 00:07:50.760 NVM Sets: Not Supported 00:07:50.760 Read Recovery 
Levels: Not Supported 00:07:50.760 Endurance Groups: Not Supported 00:07:50.760 Predictable Latency Mode: Not Supported 00:07:50.760 Traffic Based Keep Alive: Not Supported 00:07:50.760 Namespace Granularity: Not Supported 00:07:50.760 SQ Associations: Not Supported 00:07:50.760 UUID List: Not Supported 00:07:50.760 Multi-Domain Subsystem: Not Supported 00:07:50.760 Fixed Capacity Management: Not Supported 00:07:50.760 Variable Capacity Management: Not Supported 00:07:50.760 Delete Endurance Group: Not Supported 00:07:50.760 Delete NVM Set: Not Supported 00:07:50.760 Extended LBA Formats Supported: Supported 00:07:50.760 Flexible Data Placement Supported: Not Supported 00:07:50.760 00:07:50.760 Controller Memory Buffer Support 00:07:50.760 ================================ 00:07:50.760 Supported: No 00:07:50.760 00:07:50.760 Persistent Memory Region Support 00:07:50.760 ================================ 00:07:50.760 Supported: No 00:07:50.760 00:07:50.760 Admin Command Set Attributes 00:07:50.760 ============================ 00:07:50.760 Security Send/Receive: Not Supported 00:07:50.760 Format NVM: Supported 00:07:50.760 Firmware Activate/Download: Not Supported 00:07:50.760 Namespace Management: Supported 00:07:50.760 Device Self-Test: Not Supported 00:07:50.760 Directives: Supported 00:07:50.760 NVMe-MI: Not Supported 00:07:50.760 Virtualization Management: Not Supported 00:07:50.760 Doorbell Buffer Config: Supported 00:07:50.760 Get LBA Status Capability: Not Supported 00:07:50.760 Command & Feature Lockdown Capability: Not Supported 00:07:50.760 Abort Command Limit: 4 00:07:50.760 Async Event Request Limit: 4 00:07:50.760 Number of Firmware Slots: N/A 00:07:50.760 Firmware Slot 1 Read-Only: N/A 00:07:50.760 Firmware Activation Without Reset: N/A 00:07:50.760 Multiple Update Detection Support: N/A 00:07:50.760 Firmware Update Granularity: No Information Provided 00:07:50.760 Per-Namespace SMART Log: Yes 00:07:50.760 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.760 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:50.760 Command Effects Log Page: Supported 00:07:50.760 Get Log Page Extended Data: Supported 00:07:50.760 Telemetry Log Pages: Not Supported 00:07:50.760 Persistent Event Log Pages: Not Supported 00:07:50.760 Supported Log Pages Log Page: May Support 00:07:50.760 Commands Supported & Effects Log Page: Not Supported 00:07:50.760 Feature Identifiers & Effects Log Page:May Support 00:07:50.760 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.760 Data Area 4 for Telemetry Log: Not Supported 00:07:50.760 Error Log Page Entries Supported: 1 00:07:50.760 Keep Alive: Not Supported 00:07:50.760 00:07:50.760 NVM Command Set Attributes 00:07:50.760 ========================== 00:07:50.760 Submission Queue Entry Size 00:07:50.760 Max: 64 00:07:50.760 Min: 64 00:07:50.760 Completion Queue Entry Size 00:07:50.760 Max: 16 00:07:50.760 Min: 16 00:07:50.760 Number of Namespaces: 256 00:07:50.760 Compare Command: Supported 00:07:50.760 Write Uncorrectable Command: Not Supported 00:07:50.760 Dataset Management Command: Supported 00:07:50.760 Write Zeroes Command: Supported 00:07:50.760 Set Features Save Field: Supported 00:07:50.760 Reservations: Not Supported 00:07:50.760 Timestamp: Supported 00:07:50.760 Copy: Supported 00:07:50.760 Volatile Write Cache: Present 00:07:50.760 Atomic Write Unit (Normal): 1 00:07:50.760 Atomic Write Unit (PFail): 1 00:07:50.760 Atomic Compare & Write Unit: 1 00:07:50.760 Fused Compare & Write: Not Supported 00:07:50.760 Scatter-Gather List
00:07:50.760 SGL Command Set: Supported 00:07:50.760 SGL Keyed: Not Supported 00:07:50.760 SGL Bit Bucket Descriptor: Not Supported 00:07:50.760 SGL Metadata Pointer: Not Supported 00:07:50.760 Oversized SGL: Not Supported 00:07:50.760 SGL Metadata Address: Not Supported 00:07:50.760 SGL Offset: Not Supported 00:07:50.760 Transport SGL Data Block: Not Supported 00:07:50.760 Replay Protected Memory Block: Not Supported 00:07:50.760 00:07:50.760 Firmware Slot Information 00:07:50.760 ========================= 00:07:50.760 Active slot: 1 00:07:50.760 Slot 1 Firmware Revision: 1.0 00:07:50.760 00:07:50.760 00:07:50.760 Commands Supported and Effects 00:07:50.760 ============================== 00:07:50.760 Admin Commands 00:07:50.760 -------------- 00:07:50.760 Delete I/O Submission Queue (00h): Supported 00:07:50.760 Create I/O Submission Queue (01h): Supported 00:07:50.760 Get Log Page (02h): Supported 00:07:50.760 Delete I/O Completion Queue (0[2024-10-01 06:00:16.289830] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 74711 terminated unexpected 00:07:50.760 4h): Supported 00:07:50.760 Create I/O Completion Queue (05h): Supported 00:07:50.760 Identify (06h): Supported 00:07:50.760 Abort (08h): Supported 00:07:50.760 Set Features (09h): Supported 00:07:50.760 Get Features (0Ah): Supported 00:07:50.760 Asynchronous Event Request (0Ch): Supported 00:07:50.760 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.760 Directive Send (19h): Supported 00:07:50.760 Directive Receive (1Ah): Supported 00:07:50.760 Virtualization Management (1Ch): Supported 00:07:50.760 Doorbell Buffer Config (7Ch): Supported 00:07:50.760 Format NVM (80h): Supported LBA-Change 00:07:50.760 I/O Commands 00:07:50.760 ------------ 00:07:50.760 Flush (00h): Supported LBA-Change 00:07:50.760 Write (01h): Supported LBA-Change 00:07:50.760 Read (02h): Supported 00:07:50.760 Compare (05h): Supported 00:07:50.760 Write Zeroes (08h): Supported LBA-Change 00:07:50.760 Dataset Management (09h): Supported LBA-Change 00:07:50.761 Unknown (0Ch): Supported 00:07:50.761 Unknown (12h): Supported 00:07:50.761 Copy (19h): Supported LBA-Change 00:07:50.761 Unknown (1Dh): Supported LBA-Change 00:07:50.761 00:07:50.761 Error Log 00:07:50.761 ========= 00:07:50.761 00:07:50.761 Arbitration 00:07:50.761 =========== 00:07:50.761 Arbitration Burst: no limit 00:07:50.761 00:07:50.761 Power Management 00:07:50.761 ================ 00:07:50.761 Number of Power States: 1 00:07:50.761 Current Power State: Power State #0 00:07:50.761 Power State #0: 00:07:50.761 Max Power: 25.00 W 00:07:50.761 Non-Operational State: Operational 00:07:50.761 Entry Latency: 16 microseconds 00:07:50.761 Exit Latency: 4 microseconds 00:07:50.761 Relative Read Throughput: 0 00:07:50.761 Relative Read Latency: 0 00:07:50.761 Relative Write Throughput: 0 00:07:50.761 Relative Write Latency: 0 00:07:50.761 Idle Power: Not Reported 00:07:50.761 Active Power: Not Reported 00:07:50.761 Non-Operational Permissive Mode: Not Supported 00:07:50.761 00:07:50.761 Health Information 00:07:50.761 ================== 00:07:50.761 Critical Warnings: 00:07:50.761 Available Spare Space: OK 00:07:50.761 Temperature: OK 00:07:50.761 Device Reliability: OK 00:07:50.761 Read Only: No 00:07:50.761 Volatile Memory Backup: OK 00:07:50.761 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.761 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.761 Available Spare: 0% 00:07:50.761 Available Spare Threshold: 0% 00:07:50.761 Life Percentage Used: 
0% 00:07:50.761 Data Units Read: 686 00:07:50.761 Data Units Written: 615 00:07:50.761 Host Read Commands: 37482 00:07:50.761 Host Write Commands: 37268 00:07:50.761 Controller Busy Time: 0 minutes 00:07:50.761 Power Cycles: 0 00:07:50.761 Power On Hours: 0 hours 00:07:50.761 Unsafe Shutdowns: 0 00:07:50.761 Unrecoverable Media Errors: 0 00:07:50.761 Lifetime Error Log Entries: 0 00:07:50.761 Warning Temperature Time: 0 minutes 00:07:50.761 Critical Temperature Time: 0 minutes 00:07:50.761 00:07:50.761 Number of Queues 00:07:50.761 ================ 00:07:50.761 Number of I/O Submission Queues: 64 00:07:50.761 Number of I/O Completion Queues: 64 00:07:50.761 00:07:50.761 ZNS Specific Controller Data 00:07:50.761 ============================ 00:07:50.761 Zone Append Size Limit: 0 00:07:50.761 00:07:50.761 00:07:50.761 Active Namespaces 00:07:50.761 ================= 00:07:50.761 Namespace ID:1 00:07:50.761 Error Recovery Timeout: Unlimited 00:07:50.761 Command Set Identifier: NVM (00h) 00:07:50.761 Deallocate: Supported 00:07:50.761 Deallocated/Unwritten Error: Supported 00:07:50.761 Deallocated Read Value: All 0x00 00:07:50.761 Deallocate in Write Zeroes: Not Supported 00:07:50.761 Deallocated Guard Field: 0xFFFF 00:07:50.761 Flush: Supported 00:07:50.761 Reservation: Not Supported 00:07:50.761 Metadata Transferred as: Separate Metadata Buffer 00:07:50.761 Namespace Sharing Capabilities: Private 00:07:50.761 Size (in LBAs): 1548666 (5GiB) 00:07:50.761 Capacity (in LBAs): 1548666 (5GiB) 00:07:50.761 Utilization (in LBAs): 1548666 (5GiB) 00:07:50.761 Thin Provisioning: Not Supported 00:07:50.761 Per-NS Atomic Units: No 00:07:50.761 Maximum Single Source Range Length: 128 00:07:50.761 Maximum Copy Length: 128 00:07:50.761 Maximum Source Range Count: 128 00:07:50.761 NGUID/EUI64 Never Reused: No 00:07:50.761 Namespace Write Protected: No 00:07:50.761 Number of LBA Formats: 8 00:07:50.761 Current LBA Format: [2024-10-01 06:00:16.291288] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 74711 terminated unexpected 00:07:50.761 LBA Format #07 00:07:50.761 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.761 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.761 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.761 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.761 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.761 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.761 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.761 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.761 00:07:50.761 NVM Specific Namespace Data 00:07:50.761 =========================== 00:07:50.761 Logical Block Storage Tag Mask: 0 00:07:50.761 Protection Information Capabilities: 00:07:50.761 16b Guard Protection Information Storage Tag Support: No 00:07:50.761 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.761 Storage Tag Check Read Support: No 00:07:50.761 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format 
#05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.761 ===================================================== 00:07:50.761 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:50.761 ===================================================== 00:07:50.761 Controller Capabilities/Features 00:07:50.761 ================================ 00:07:50.761 Vendor ID: 1b36 00:07:50.761 Subsystem Vendor ID: 1af4 00:07:50.761 Serial Number: 12341 00:07:50.761 Model Number: QEMU NVMe Ctrl 00:07:50.761 Firmware Version: 8.0.0 00:07:50.761 Recommended Arb Burst: 6 00:07:50.761 IEEE OUI Identifier: 00 54 52 00:07:50.761 Multi-path I/O 00:07:50.761 May have multiple subsystem ports: No 00:07:50.761 May have multiple controllers: No 00:07:50.761 Associated with SR-IOV VF: No 00:07:50.761 Max Data Transfer Size: 524288 00:07:50.761 Max Number of Namespaces: 256 00:07:50.761 Max Number of I/O Queues: 64 00:07:50.761 NVMe Specification Version (VS): 1.4 00:07:50.761 NVMe Specification Version (Identify): 1.4 00:07:50.761 Maximum Queue Entries: 2048 00:07:50.761 Contiguous Queues Required: Yes 00:07:50.761 Arbitration Mechanisms Supported 00:07:50.761 Weighted Round Robin: Not Supported 00:07:50.761 Vendor Specific: Not Supported 00:07:50.761 Reset Timeout: 7500 ms 00:07:50.761 Doorbell Stride: 4 bytes 00:07:50.761 NVM Subsystem Reset: Not Supported 00:07:50.761 Command Sets Supported 00:07:50.761 NVM Command Set: Supported 00:07:50.761 Boot Partition: Not Supported 00:07:50.761 Memory Page Size Minimum: 4096 bytes 00:07:50.761 Memory Page Size Maximum: 65536 bytes 00:07:50.761 Persistent Memory Region: Not Supported 00:07:50.761 Optional Asynchronous Events Supported 00:07:50.761 Namespace Attribute Notices: Supported 00:07:50.761 Firmware Activation Notices: Not Supported 00:07:50.761 ANA Change Notices: Not Supported 00:07:50.761 PLE Aggregate Log Change Notices: Not Supported 00:07:50.761 LBA Status Info Alert Notices: Not Supported 00:07:50.761 EGE Aggregate Log Change Notices: Not Supported 00:07:50.761 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.761 Zone Descriptor Change Notices: Not Supported 00:07:50.761 Discovery Log Change Notices: Not Supported 00:07:50.761 Controller Attributes 00:07:50.761 128-bit Host Identifier: Not Supported 00:07:50.761 Non-Operational Permissive Mode: Not Supported 00:07:50.761 NVM Sets: Not Supported 00:07:50.761 Read Recovery Levels: Not Supported 00:07:50.761 Endurance Groups: Not Supported 00:07:50.761 Predictable Latency Mode: Not Supported 00:07:50.761 Traffic Based Keep Alive: Not Supported 00:07:50.761 Namespace Granularity: Not Supported 00:07:50.761 SQ Associations: Not Supported 00:07:50.761 UUID List: Not Supported 00:07:50.761 Multi-Domain Subsystem: Not Supported 00:07:50.761 Fixed Capacity Management: Not Supported 00:07:50.761 Variable Capacity Management: Not Supported 00:07:50.762 Delete Endurance Group: Not Supported 00:07:50.762 Delete NVM Set: Not Supported 00:07:50.762 Extended LBA Formats Supported: Supported 00:07:50.762 Flexible Data Placement Supported: Not Supported 00:07:50.762 00:07:50.762 Controller Memory Buffer Support 00:07:50.762 ================================ 00:07:50.762 Supported: No 00:07:50.762 00:07:50.762 Persistent Memory Region Support 00:07:50.762 ================================
Supported: No 00:07:50.762 00:07:50.762 Admin Command Set Attributes 00:07:50.762 ============================ 00:07:50.762 Security Send/Receive: Not Supported 00:07:50.762 Format NVM: Supported 00:07:50.762 Firmware Activate/Download: Not Supported 00:07:50.762 Namespace Management: Supported 00:07:50.762 Device Self-Test: Not Supported 00:07:50.762 Directives: Supported 00:07:50.762 NVMe-MI: Not Supported 00:07:50.762 Virtualization Management: Not Supported 00:07:50.762 Doorbell Buffer Config: Supported 00:07:50.762 Get LBA Status Capability: Not Supported 00:07:50.762 Command & Feature Lockdown Capability: Not Supported 00:07:50.762 Abort Command Limit: 4 00:07:50.762 Async Event Request Limit: 4 00:07:50.762 Number of Firmware Slots: N/A 00:07:50.762 Firmware Slot 1 Read-Only: N/A 00:07:50.762 Firmware Activation Without Reset: N/A 00:07:50.762 Multiple Update Detection Support: N/A 00:07:50.762 Firmware Update Granularity: No Information Provided 00:07:50.762 Per-Namespace SMART Log: Yes 00:07:50.762 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.762 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:50.762 Command Effects Log Page: Supported 00:07:50.762 Get Log Page Extended Data: Supported 00:07:50.762 Telemetry Log Pages: Not Supported 00:07:50.762 Persistent Event Log Pages: Not Supported 00:07:50.762 Supported Log Pages Log Page: May Support 00:07:50.762 Commands Supported & Effects Log Page: Not Supported 00:07:50.762 Feature Identifiers & Effects Log Page:May Support 00:07:50.762 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.762 Data Area 4 for Telemetry Log: Not Supported 00:07:50.762 Error Log Page Entries Supported: 1 00:07:50.762 Keep Alive: Not Supported 00:07:50.762 00:07:50.762 NVM Command Set Attributes 00:07:50.762 ========================== 00:07:50.762 Submission Queue Entry Size 00:07:50.762 Max: 64 00:07:50.762 Min: 64 00:07:50.762 Completion Queue Entry Size 00:07:50.762 Max: 16 00:07:50.762 Min: 16 00:07:50.762 Number of Namespaces: 256 00:07:50.762 Compare Command: Supported 00:07:50.762 Write Uncorrectable Command: Not Supported 00:07:50.762 Dataset Management Command: Supported 00:07:50.762 Write Zeroes Command: Supported 00:07:50.762 Set Features Save Field: Supported 00:07:50.762 Reservations: Not Supported 00:07:50.762 Timestamp: Supported 00:07:50.762 Copy: Supported 00:07:50.762 Volatile Write Cache: Present 00:07:50.762 Atomic Write Unit (Normal): 1 00:07:50.762 Atomic Write Unit (PFail): 1 00:07:50.762 Atomic Compare & Write Unit: 1 00:07:50.762 Fused Compare & Write: Not Supported 00:07:50.762 Scatter-Gather List 00:07:50.762 SGL Command Set: Supported 00:07:50.762 SGL Keyed: Not Supported 00:07:50.762 SGL Bit Bucket Descriptor: Not Supported 00:07:50.762 SGL Metadata Pointer: Not Supported 00:07:50.762 Oversized SGL: Not Supported 00:07:50.762 SGL Metadata Address: Not Supported 00:07:50.762 SGL Offset: Not Supported 00:07:50.762 Transport SGL Data Block: Not Supported 00:07:50.762 Replay Protected Memory Block: Not Supported 00:07:50.762 00:07:50.762 Firmware Slot Information 00:07:50.762 ========================= 00:07:50.762 Active slot: 1 00:07:50.762 Slot 1 Firmware Revision: 1.0 00:07:50.762 00:07:50.762 00:07:50.762 Commands Supported and Effects 00:07:50.762 ============================== 00:07:50.762 Admin Commands 00:07:50.762 -------------- 00:07:50.762 Delete I/O Submission Queue (00h): Supported 00:07:50.762 Create I/O Submission Queue (01h): Supported 00:07:50.762 Get Log Page (02h): Supported 00:07:50.762 
Delete I/O Completion Queue (04h): Supported 00:07:50.762 Create I/O Completion Queue (05h): Supported 00:07:50.762 Identify (06h): Supported 00:07:50.762 Abort (08h): Supported 00:07:50.762 Set Features (09h): Supported 00:07:50.762 Get Features (0Ah): Supported 00:07:50.762 Asynchronous Event Request (0Ch): Supported 00:07:50.762 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.762 Directive Send (19h): Supported 00:07:50.762 Directive Receive (1Ah): Supported 00:07:50.762 Virtualization Management (1Ch): Supported 00:07:50.762 Doorbell Buffer Config (7Ch): Supported 00:07:50.762 Format NVM (80h): Supported LBA-Change 00:07:50.762 I/O Commands 00:07:50.762 ------------ 00:07:50.762 Flush (00h): Supported LBA-Change 00:07:50.762 Write (01h): Supported LBA-Change 00:07:50.762 Read (02h): Supported 00:07:50.762 Compare (05h): Supported 00:07:50.762 Write Zeroes (08h): Supported LBA-Change 00:07:50.762 Dataset Management (09h): Supported LBA-Change 00:07:50.762 Unknown (0Ch): Supported 00:07:50.762 Unknown (12h): Supported 00:07:50.762 Copy (19h): Supported LBA-Change 00:07:50.762 Unknown (1Dh): Supported LBA-Change 00:07:50.762 00:07:50.762 Error Log 00:07:50.762 ========= 00:07:50.762 00:07:50.762 Arbitration 00:07:50.762 =========== 00:07:50.762 Arbitration Burst: no limit 00:07:50.762 00:07:50.762 Power Management 00:07:50.762 ================ 00:07:50.762 Number of Power States: 1 00:07:50.762 Current Power State: Power State #0 00:07:50.762 Power State #0: 00:07:50.762 Max Power: 25.00 W 00:07:50.762 Non-Operational State: Operational 00:07:50.762 Entry Latency: 16 microseconds 00:07:50.762 Exit Latency: 4 microseconds 00:07:50.762 Relative Read Throughput: 0 00:07:50.762 Relative Read Latency: 0 00:07:50.762 Relative Write Throughput: 0 00:07:50.762 Relative Write Latency: 0 00:07:50.762 Idle Power: Not Reported 00:07:50.762 Active Power: Not Reported 00:07:50.762 Non-Operational Permissive Mode: Not Supported 00:07:50.762 00:07:50.762 Health Information 00:07:50.762 ================== 00:07:50.762 Critical Warnings: 00:07:50.762 Available Spare Space: OK 00:07:50.762 Temperature: OK 00:07:50.762 Device Reliability: OK 00:07:50.762 Read Only: No 00:07:50.762 Volatile Memory Backup: OK 00:07:50.762 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.762 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.762 Available Spare: 0% 00:07:50.762 Available Spare Threshold: 0% 00:07:50.762 Life Percentage Used: 0% 00:07:50.762 Data Units Read: 1080 00:07:50.762 Data Units Written: 946 00:07:50.762 Host Read Commands: 55353 00:07:50.762 Host Write Commands: 54140 00:07:50.762 Controller Busy Time: 0 minutes 00:07:50.762 Power Cycles: 0 00:07:50.762 Power On Hours: 0 hours 00:07:50.762 Unsafe Shutdowns: 0 00:07:50.762 Unrecoverable Media Errors: 0 00:07:50.762 Lifetime Error Log Entries: 0 00:07:50.762 Warning Temperature Time: 0 minutes 00:07:50.762 Critical Temperature Time: 0 minutes 00:07:50.762 00:07:50.762 Number of Queues 00:07:50.762 ================ 00:07:50.762 Number of I/O Submission Queues: 64 00:07:50.762 Number of I/O Completion Queues: 64 00:07:50.762 00:07:50.762 ZNS Specific Controller Data 00:07:50.762 ============================ 00:07:50.762 Zone Append Size Limit: 0 00:07:50.762 00:07:50.762 00:07:50.762 Active Namespaces 00:07:50.762 ================= 00:07:50.762 Namespace ID:1 00:07:50.762 Error Recovery Timeout: Unlimited 00:07:50.762 Command Set Identifier: NVM (00h) 00:07:50.762 Deallocate: Supported 00:07:50.762 Deallocated/Unwritten Error: 
Supported 00:07:50.762 Deallocated Read Value: All 0x00 00:07:50.762 Deallocate in Write Zeroes: Not Supported 00:07:50.763 Deallocated Guard Field: 0xFFFF 00:07:50.763 Flush: Supported 00:07:50.763 Reservation: Not Supported 00:07:50.763 Namespace Sharing Capabilities: Private 00:07:50.763 Size (in LBAs): 1310720 (5GiB) 00:07:50.763 Capacity (in LBAs): 1310720 (5GiB) 00:07:50.763 Utilization (in LBAs): 1310720 (5GiB) 00:07:50.763 Thin Provisioning: Not Supported 00:07:50.763 Per-NS Atomic Units: No 00:07:50.763 Maximum Single Source Range Length: 128 00:07:50.763 Maximum Copy Length: 128 00:07:50.763 Maximum Source Range Count: 128 00:07:50.763 NGUID/EUI64 Never Reused: No 00:07:50.763 Namespace Write Protected: No 00:07:50.763 Number of LBA Formats: 8 00:07:50.763 Current LBA Format: LBA Format #04 00:07:50.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.763 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.763 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.763 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.763 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.763 [2024-10-01 06:00:16.293398] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 74711 terminated unexpected 00:07:50.763 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.763 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.763 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.763 00:07:50.763 NVM Specific Namespace Data 00:07:50.763 =========================== 00:07:50.763 Logical Block Storage Tag Mask: 0 00:07:50.763 Protection Information Capabilities: 00:07:50.763 16b Guard Protection Information Storage Tag Support: No 00:07:50.763 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.763 Storage Tag Check Read Support: No 00:07:50.763 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.763 ===================================================== 00:07:50.763 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:50.763 ===================================================== 00:07:50.763 Controller Capabilities/Features 00:07:50.763 ================================ 00:07:50.763 Vendor ID: 1b36 00:07:50.763 Subsystem Vendor ID: 1af4 00:07:50.763 Serial Number: 12342 00:07:50.763 Model Number: QEMU NVMe Ctrl 00:07:50.763 Firmware Version: 8.0.0 00:07:50.763 Recommended Arb Burst: 6 00:07:50.763 IEEE OUI Identifier: 00 54 52 00:07:50.763 Multi-path I/O 00:07:50.763 May have multiple subsystem ports: No 00:07:50.763 May have multiple controllers: No 00:07:50.763 Associated with SR-IOV VF: No 00:07:50.763 Max Data Transfer Size: 524288 00:07:50.763 Max Number of Namespaces: 256 00:07:50.763 Max Number of I/O Queues: 64 00:07:50.763 NVMe
Specification Version (VS): 1.4 00:07:50.763 NVMe Specification Version (Identify): 1.4 00:07:50.763 Maximum Queue Entries: 2048 00:07:50.763 Contiguous Queues Required: Yes 00:07:50.763 Arbitration Mechanisms Supported 00:07:50.763 Weighted Round Robin: Not Supported 00:07:50.763 Vendor Specific: Not Supported 00:07:50.763 Reset Timeout: 7500 ms 00:07:50.763 Doorbell Stride: 4 bytes 00:07:50.763 NVM Subsystem Reset: Not Supported 00:07:50.763 Command Sets Supported 00:07:50.763 NVM Command Set: Supported 00:07:50.763 Boot Partition: Not Supported 00:07:50.763 Memory Page Size Minimum: 4096 bytes 00:07:50.763 Memory Page Size Maximum: 65536 bytes 00:07:50.763 Persistent Memory Region: Not Supported 00:07:50.763 Optional Asynchronous Events Supported 00:07:50.763 Namespace Attribute Notices: Supported 00:07:50.763 Firmware Activation Notices: Not Supported 00:07:50.763 ANA Change Notices: Not Supported 00:07:50.763 PLE Aggregate Log Change Notices: Not Supported 00:07:50.763 LBA Status Info Alert Notices: Not Supported 00:07:50.763 EGE Aggregate Log Change Notices: Not Supported 00:07:50.763 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.763 Zone Descriptor Change Notices: Not Supported 00:07:50.763 Discovery Log Change Notices: Not Supported 00:07:50.763 Controller Attributes 00:07:50.763 128-bit Host Identifier: Not Supported 00:07:50.763 Non-Operational Permissive Mode: Not Supported 00:07:50.763 NVM Sets: Not Supported 00:07:50.763 Read Recovery Levels: Not Supported 00:07:50.763 Endurance Groups: Not Supported 00:07:50.763 Predictable Latency Mode: Not Supported 00:07:50.763 Traffic Based Keep ALive: Not Supported 00:07:50.763 Namespace Granularity: Not Supported 00:07:50.763 SQ Associations: Not Supported 00:07:50.763 UUID List: Not Supported 00:07:50.763 Multi-Domain Subsystem: Not Supported 00:07:50.763 Fixed Capacity Management: Not Supported 00:07:50.763 Variable Capacity Management: Not Supported 00:07:50.763 Delete Endurance Group: Not Supported 00:07:50.763 Delete NVM Set: Not Supported 00:07:50.763 Extended LBA Formats Supported: Supported 00:07:50.763 Flexible Data Placement Supported: Not Supported 00:07:50.763 00:07:50.763 Controller Memory Buffer Support 00:07:50.763 ================================ 00:07:50.763 Supported: No 00:07:50.763 00:07:50.763 Persistent Memory Region Support 00:07:50.763 ================================ 00:07:50.763 Supported: No 00:07:50.763 00:07:50.763 Admin Command Set Attributes 00:07:50.763 ============================ 00:07:50.763 Security Send/Receive: Not Supported 00:07:50.763 Format NVM: Supported 00:07:50.763 Firmware Activate/Download: Not Supported 00:07:50.763 Namespace Management: Supported 00:07:50.763 Device Self-Test: Not Supported 00:07:50.763 Directives: Supported 00:07:50.763 NVMe-MI: Not Supported 00:07:50.763 Virtualization Management: Not Supported 00:07:50.763 Doorbell Buffer Config: Supported 00:07:50.763 Get LBA Status Capability: Not Supported 00:07:50.763 Command & Feature Lockdown Capability: Not Supported 00:07:50.763 Abort Command Limit: 4 00:07:50.763 Async Event Request Limit: 4 00:07:50.763 Number of Firmware Slots: N/A 00:07:50.763 Firmware Slot 1 Read-Only: N/A 00:07:50.763 Firmware Activation Without Reset: N/A 00:07:50.763 Multiple Update Detection Support: N/A 00:07:50.763 Firmware Update Granularity: No Information Provided 00:07:50.763 Per-Namespace SMART Log: Yes 00:07:50.763 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.763 Subsystem NQN: nqn.2019-08.org.qemu:12342 
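The queue attributes repeated in each of these identify dumps (Maximum Queue Entries: 2048, 64-byte submission entries, 16-byte completion entries) fix the memory cost of a full-depth queue pair. A minimal sketch of that arithmetic, with illustrative variable names not taken from the harness:

#!/usr/bin/env bash
# Per-queue memory for the QEMU controllers dumped above, assuming the
# values from the identify output: 2048 entries, SQE = 64 B, CQE = 16 B.
entries=2048
sqe=64   # Submission Queue Entry Size (Max)
cqe=16   # Completion Queue Entry Size (Max)
echo "SQ: $(( entries * sqe / 1024 )) KiB"   # 128 KiB
echo "CQ: $(( entries * cqe / 1024 )) KiB"   # 32 KiB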
00:07:50.763 Command Effects Log Page: Supported 00:07:50.763 Get Log Page Extended Data: Supported 00:07:50.764 Telemetry Log Pages: Not Supported 00:07:50.764 Persistent Event Log Pages: Not Supported 00:07:50.764 Supported Log Pages Log Page: May Support 00:07:50.764 Commands Supported & Effects Log Page: Not Supported 00:07:50.764 Feature Identifiers & Effects Log Page:May Support 00:07:50.764 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.764 Data Area 4 for Telemetry Log: Not Supported 00:07:50.764 Error Log Page Entries Supported: 1 00:07:50.764 Keep Alive: Not Supported 00:07:50.764 00:07:50.764 NVM Command Set Attributes 00:07:50.764 ========================== 00:07:50.764 Submission Queue Entry Size 00:07:50.764 Max: 64 00:07:50.764 Min: 64 00:07:50.764 Completion Queue Entry Size 00:07:50.764 Max: 16 00:07:50.764 Min: 16 00:07:50.764 Number of Namespaces: 256 00:07:50.764 Compare Command: Supported 00:07:50.764 Write Uncorrectable Command: Not Supported 00:07:50.764 Dataset Management Command: Supported 00:07:50.764 Write Zeroes Command: Supported 00:07:50.764 Set Features Save Field: Supported 00:07:50.764 Reservations: Not Supported 00:07:50.764 Timestamp: Supported 00:07:50.764 Copy: Supported 00:07:50.764 Volatile Write Cache: Present 00:07:50.764 Atomic Write Unit (Normal): 1 00:07:50.764 Atomic Write Unit (PFail): 1 00:07:50.764 Atomic Compare & Write Unit: 1 00:07:50.764 Fused Compare & Write: Not Supported 00:07:50.764 Scatter-Gather List 00:07:50.764 SGL Command Set: Supported 00:07:50.764 SGL Keyed: Not Supported 00:07:50.764 SGL Bit Bucket Descriptor: Not Supported 00:07:50.764 SGL Metadata Pointer: Not Supported 00:07:50.764 Oversized SGL: Not Supported 00:07:50.764 SGL Metadata Address: Not Supported 00:07:50.764 SGL Offset: Not Supported 00:07:50.764 Transport SGL Data Block: Not Supported 00:07:50.764 Replay Protected Memory Block: Not Supported 00:07:50.764 00:07:50.764 Firmware Slot Information 00:07:50.764 ========================= 00:07:50.764 Active slot: 1 00:07:50.764 Slot 1 Firmware Revision: 1.0 00:07:50.764 00:07:50.764 00:07:50.764 Commands Supported and Effects 00:07:50.764 ============================== 00:07:50.764 Admin Commands 00:07:50.764 -------------- 00:07:50.764 Delete I/O Submission Queue (00h): Supported 00:07:50.764 Create I/O Submission Queue (01h): Supported 00:07:50.764 Get Log Page (02h): Supported 00:07:50.764 Delete I/O Completion Queue (04h): Supported 00:07:50.764 Create I/O Completion Queue (05h): Supported 00:07:50.764 Identify (06h): Supported 00:07:50.764 Abort (08h): Supported 00:07:50.764 Set Features (09h): Supported 00:07:50.764 Get Features (0Ah): Supported 00:07:50.764 Asynchronous Event Request (0Ch): Supported 00:07:50.764 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.764 Directive Send (19h): Supported 00:07:50.764 Directive Receive (1Ah): Supported 00:07:50.764 Virtualization Management (1Ch): Supported 00:07:50.764 Doorbell Buffer Config (7Ch): Supported 00:07:50.764 Format NVM (80h): Supported LBA-Change 00:07:50.764 I/O Commands 00:07:50.764 ------------ 00:07:50.764 Flush (00h): Supported LBA-Change 00:07:50.764 Write (01h): Supported LBA-Change 00:07:50.764 Read (02h): Supported 00:07:50.764 Compare (05h): Supported 00:07:50.764 Write Zeroes (08h): Supported LBA-Change 00:07:50.764 Dataset Management (09h): Supported LBA-Change 00:07:50.764 Unknown (0Ch): Supported 00:07:50.764 Unknown (12h): Supported 00:07:50.764 Copy (19h): Supported LBA-Change 00:07:50.764 Unknown (1Dh): 
Supported LBA-Change 00:07:50.764 00:07:50.764 Error Log 00:07:50.764 ========= 00:07:50.764 00:07:50.764 Arbitration 00:07:50.764 =========== 00:07:50.764 Arbitration Burst: no limit 00:07:50.764 00:07:50.764 Power Management 00:07:50.764 ================ 00:07:50.764 Number of Power States: 1 00:07:50.764 Current Power State: Power State #0 00:07:50.764 Power State #0: 00:07:50.764 Max Power: 25.00 W 00:07:50.764 Non-Operational State: Operational 00:07:50.764 Entry Latency: 16 microseconds 00:07:50.764 Exit Latency: 4 microseconds 00:07:50.764 Relative Read Throughput: 0 00:07:50.764 Relative Read Latency: 0 00:07:50.764 Relative Write Throughput: 0 00:07:50.764 Relative Write Latency: 0 00:07:50.764 Idle Power: Not Reported 00:07:50.764 Active Power: Not Reported 00:07:50.764 Non-Operational Permissive Mode: Not Supported 00:07:50.764 00:07:50.764 Health Information 00:07:50.764 ================== 00:07:50.764 Critical Warnings: 00:07:50.764 Available Spare Space: OK 00:07:50.764 Temperature: OK 00:07:50.764 Device Reliability: OK 00:07:50.764 Read Only: No 00:07:50.764 Volatile Memory Backup: OK 00:07:50.764 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.764 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.764 Available Spare: 0% 00:07:50.764 Available Spare Threshold: 0% 00:07:50.764 Life Percentage Used: 0% 00:07:50.764 Data Units Read: 2257 00:07:50.764 Data Units Written: 2045 00:07:50.764 Host Read Commands: 115011 00:07:50.764 Host Write Commands: 113280 00:07:50.764 Controller Busy Time: 0 minutes 00:07:50.764 Power Cycles: 0 00:07:50.764 Power On Hours: 0 hours 00:07:50.764 Unsafe Shutdowns: 0 00:07:50.764 Unrecoverable Media Errors: 0 00:07:50.764 Lifetime Error Log Entries: 0 00:07:50.764 Warning Temperature Time: 0 minutes 00:07:50.764 Critical Temperature Time: 0 minutes 00:07:50.764 00:07:50.764 Number of Queues 00:07:50.764 ================ 00:07:50.764 Number of I/O Submission Queues: 64 00:07:50.764 Number of I/O Completion Queues: 64 00:07:50.764 00:07:50.764 ZNS Specific Controller Data 00:07:50.764 ============================ 00:07:50.764 Zone Append Size Limit: 0 00:07:50.764 00:07:50.764 00:07:50.764 Active Namespaces 00:07:50.764 ================= 00:07:50.764 Namespace ID:1 00:07:50.764 Error Recovery Timeout: Unlimited 00:07:50.764 Command Set Identifier: NVM (00h) 00:07:50.764 Deallocate: Supported 00:07:50.764 Deallocated/Unwritten Error: Supported 00:07:50.764 Deallocated Read Value: All 0x00 00:07:50.764 Deallocate in Write Zeroes: Not Supported 00:07:50.764 Deallocated Guard Field: 0xFFFF 00:07:50.764 Flush: Supported 00:07:50.764 Reservation: Not Supported 00:07:50.764 Namespace Sharing Capabilities: Private 00:07:50.764 Size (in LBAs): 1048576 (4GiB) 00:07:50.764 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.764 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.764 Thin Provisioning: Not Supported 00:07:50.764 Per-NS Atomic Units: No 00:07:50.764 Maximum Single Source Range Length: 128 00:07:50.764 Maximum Copy Length: 128 00:07:50.764 Maximum Source Range Count: 128 00:07:50.764 NGUID/EUI64 Never Reused: No 00:07:50.764 Namespace Write Protected: No 00:07:50.764 Number of LBA Formats: 8 00:07:50.764 Current LBA Format: LBA Format #04 00:07:50.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.764 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.764 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.764 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.764 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:50.764 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.764 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.764 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.764 00:07:50.764 NVM Specific Namespace Data 00:07:50.764 =========================== 00:07:50.764 Logical Block Storage Tag Mask: 0 00:07:50.764 Protection Information Capabilities: 00:07:50.764 16b Guard Protection Information Storage Tag Support: No 00:07:50.764 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.764 Storage Tag Check Read Support: No 00:07:50.764 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.764 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.764 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Namespace ID:2 00:07:50.765 Error Recovery Timeout: Unlimited 00:07:50.765 Command Set Identifier: NVM (00h) 00:07:50.765 Deallocate: Supported 00:07:50.765 Deallocated/Unwritten Error: Supported 00:07:50.765 Deallocated Read Value: All 0x00 00:07:50.765 Deallocate in Write Zeroes: Not Supported 00:07:50.765 Deallocated Guard Field: 0xFFFF 00:07:50.765 Flush: Supported 00:07:50.765 Reservation: Not Supported 00:07:50.765 Namespace Sharing Capabilities: Private 00:07:50.765 Size (in LBAs): 1048576 (4GiB) 00:07:50.765 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.765 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.765 Thin Provisioning: Not Supported 00:07:50.765 Per-NS Atomic Units: No 00:07:50.765 Maximum Single Source Range Length: 128 00:07:50.765 Maximum Copy Length: 128 00:07:50.765 Maximum Source Range Count: 128 00:07:50.765 NGUID/EUI64 Never Reused: No 00:07:50.765 Namespace Write Protected: No 00:07:50.765 Number of LBA Formats: 8 00:07:50.765 Current LBA Format: LBA Format #04 00:07:50.765 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.765 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.765 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.765 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.765 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.765 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.765 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.765 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.765 00:07:50.765 NVM Specific Namespace Data 00:07:50.765 =========================== 00:07:50.765 Logical Block Storage Tag Mask: 0 00:07:50.765 Protection Information Capabilities: 00:07:50.765 16b Guard Protection Information Storage Tag Support: No 00:07:50.765 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.765 Storage Tag Check Read Support: No 00:07:50.765 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Namespace ID:3 00:07:50.765 Error Recovery Timeout: Unlimited 00:07:50.765 Command Set Identifier: NVM (00h) 00:07:50.765 Deallocate: Supported 00:07:50.765 Deallocated/Unwritten Error: Supported 00:07:50.765 Deallocated Read Value: All 0x00 00:07:50.765 Deallocate in Write Zeroes: Not Supported 00:07:50.765 Deallocated Guard Field: 0xFFFF 00:07:50.765 Flush: Supported 00:07:50.765 Reservation: Not Supported 00:07:50.765 Namespace Sharing Capabilities: Private 00:07:50.765 Size (in LBAs): 1048576 (4GiB) 00:07:50.765 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.765 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.765 Thin Provisioning: Not Supported 00:07:50.765 Per-NS Atomic Units: No 00:07:50.765 Maximum Single Source Range Length: 128 00:07:50.765 Maximum Copy Length: 128 00:07:50.765 Maximum Source Range Count: 128 00:07:50.765 NGUID/EUI64 Never Reused: No 00:07:50.765 Namespace Write Protected: No 00:07:50.765 Number of LBA Formats: 8 00:07:50.765 Current LBA Format: LBA Format #04 00:07:50.765 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.765 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.765 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.765 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.765 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.765 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.765 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.765 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.765 00:07:50.765 NVM Specific Namespace Data 00:07:50.765 =========================== 00:07:50.765 Logical Block Storage Tag Mask: 0 00:07:50.765 Protection Information Capabilities: 00:07:50.765 16b Guard Protection Information Storage Tag Support: No 00:07:50.765 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.765 Storage Tag Check Read Support: No 00:07:50.765 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.765 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:50.765 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:51.024 ===================================================== 00:07:51.024 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.024 ===================================================== 00:07:51.024 Controller Capabilities/Features 00:07:51.024 ================================ 00:07:51.024 Vendor ID: 1b36 00:07:51.024 Subsystem Vendor ID: 1af4 00:07:51.024 Serial Number: 12340 00:07:51.024 Model Number: QEMU NVMe Ctrl 00:07:51.024 Firmware Version: 8.0.0 00:07:51.024 Recommended Arb Burst: 6 00:07:51.024 IEEE OUI Identifier: 00 54 52 00:07:51.024 Multi-path I/O 00:07:51.024 May have multiple subsystem ports: No 00:07:51.024 May have multiple controllers: No 00:07:51.024 Associated with SR-IOV VF: No 00:07:51.024 Max Data Transfer Size: 524288 00:07:51.024 Max Number of Namespaces: 256 00:07:51.024 Max Number of I/O Queues: 64 00:07:51.024 NVMe Specification Version (VS): 1.4 00:07:51.024 NVMe Specification Version (Identify): 1.4 00:07:51.024 Maximum Queue Entries: 2048 00:07:51.024 Contiguous Queues Required: Yes 00:07:51.024 Arbitration Mechanisms Supported 00:07:51.024 Weighted Round Robin: Not Supported 00:07:51.024 Vendor Specific: Not Supported 00:07:51.024 Reset Timeout: 7500 ms 00:07:51.024 Doorbell Stride: 4 bytes 00:07:51.024 NVM Subsystem Reset: Not Supported 00:07:51.024 Command Sets Supported 00:07:51.024 NVM Command Set: Supported 00:07:51.024 Boot Partition: Not Supported 00:07:51.024 Memory Page Size Minimum: 4096 bytes 00:07:51.024 Memory Page Size Maximum: 65536 bytes 00:07:51.024 Persistent Memory Region: Not Supported 00:07:51.024 Optional Asynchronous Events Supported 00:07:51.024 Namespace Attribute Notices: Supported 00:07:51.024 Firmware Activation Notices: Not Supported 00:07:51.024 ANA Change Notices: Not Supported 00:07:51.024 PLE Aggregate Log Change Notices: Not Supported 00:07:51.024 LBA Status Info Alert Notices: Not Supported 00:07:51.024 EGE Aggregate Log Change Notices: Not Supported 00:07:51.024 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.024 Zone Descriptor Change Notices: Not Supported 00:07:51.024 Discovery Log Change Notices: Not Supported 00:07:51.024 Controller Attributes 00:07:51.024 128-bit Host Identifier: Not Supported 00:07:51.024 Non-Operational Permissive Mode: Not Supported 00:07:51.024 NVM Sets: Not Supported 00:07:51.024 Read Recovery Levels: Not Supported 00:07:51.024 Endurance Groups: Not Supported 00:07:51.024 Predictable Latency Mode: Not Supported 00:07:51.024 Traffic Based Keep ALive: Not Supported 00:07:51.024 Namespace Granularity: Not Supported 00:07:51.024 SQ Associations: Not Supported 00:07:51.024 UUID List: Not Supported 00:07:51.024 Multi-Domain Subsystem: Not Supported 00:07:51.024 Fixed Capacity Management: Not Supported 00:07:51.024 Variable Capacity Management: Not Supported 00:07:51.024 Delete Endurance Group: Not Supported 00:07:51.024 Delete NVM Set: Not Supported 00:07:51.024 Extended LBA Formats Supported: Supported 00:07:51.024 Flexible Data Placement Supported: Not Supported 00:07:51.024 00:07:51.024 Controller Memory Buffer Support 00:07:51.024 ================================ 00:07:51.024 Supported: No 00:07:51.024 00:07:51.024 Persistent Memory Region Support 00:07:51.024 ================================ 00:07:51.024 Supported: No 00:07:51.024 00:07:51.024 Admin Command Set Attributes 00:07:51.024 ============================ 00:07:51.024 Security Send/Receive: Not Supported 00:07:51.024 
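The spdk_nvme_identify invocation above is one iteration of the nvme.sh loop over PCIe bdfs visible in the trace. A hedged sketch of that loop follows; the array contents are the three controllers seen in this run, while the real script populates bdfs elsewhere:

#!/usr/bin/env bash
# One identify pass per PCIe address, mirroring the nvme.sh trace above.
# Assumption: bdfs is normally filled by enumerating the bus; the values
# below are hard-coded from the controllers appearing in this log.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0
done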
Format NVM: Supported 00:07:51.024 Firmware Activate/Download: Not Supported 00:07:51.024 Namespace Management: Supported 00:07:51.024 Device Self-Test: Not Supported 00:07:51.024 Directives: Supported 00:07:51.024 NVMe-MI: Not Supported 00:07:51.024 Virtualization Management: Not Supported 00:07:51.024 Doorbell Buffer Config: Supported 00:07:51.024 Get LBA Status Capability: Not Supported 00:07:51.024 Command & Feature Lockdown Capability: Not Supported 00:07:51.024 Abort Command Limit: 4 00:07:51.024 Async Event Request Limit: 4 00:07:51.024 Number of Firmware Slots: N/A 00:07:51.024 Firmware Slot 1 Read-Only: N/A 00:07:51.024 Firmware Activation Without Reset: N/A 00:07:51.024 Multiple Update Detection Support: N/A 00:07:51.024 Firmware Update Granularity: No Information Provided 00:07:51.024 Per-Namespace SMART Log: Yes 00:07:51.024 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.024 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:51.024 Command Effects Log Page: Supported 00:07:51.024 Get Log Page Extended Data: Supported 00:07:51.024 Telemetry Log Pages: Not Supported 00:07:51.024 Persistent Event Log Pages: Not Supported 00:07:51.024 Supported Log Pages Log Page: May Support 00:07:51.024 Commands Supported & Effects Log Page: Not Supported 00:07:51.024 Feature Identifiers & Effects Log Page:May Support 00:07:51.024 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.024 Data Area 4 for Telemetry Log: Not Supported 00:07:51.024 Error Log Page Entries Supported: 1 00:07:51.024 Keep Alive: Not Supported 00:07:51.024 00:07:51.024 NVM Command Set Attributes 00:07:51.024 ========================== 00:07:51.024 Submission Queue Entry Size 00:07:51.024 Max: 64 00:07:51.024 Min: 64 00:07:51.024 Completion Queue Entry Size 00:07:51.024 Max: 16 00:07:51.024 Min: 16 00:07:51.024 Number of Namespaces: 256 00:07:51.024 Compare Command: Supported 00:07:51.024 Write Uncorrectable Command: Not Supported 00:07:51.024 Dataset Management Command: Supported 00:07:51.024 Write Zeroes Command: Supported 00:07:51.024 Set Features Save Field: Supported 00:07:51.024 Reservations: Not Supported 00:07:51.024 Timestamp: Supported 00:07:51.024 Copy: Supported 00:07:51.024 Volatile Write Cache: Present 00:07:51.024 Atomic Write Unit (Normal): 1 00:07:51.025 Atomic Write Unit (PFail): 1 00:07:51.025 Atomic Compare & Write Unit: 1 00:07:51.025 Fused Compare & Write: Not Supported 00:07:51.025 Scatter-Gather List 00:07:51.025 SGL Command Set: Supported 00:07:51.025 SGL Keyed: Not Supported 00:07:51.025 SGL Bit Bucket Descriptor: Not Supported 00:07:51.025 SGL Metadata Pointer: Not Supported 00:07:51.025 Oversized SGL: Not Supported 00:07:51.025 SGL Metadata Address: Not Supported 00:07:51.025 SGL Offset: Not Supported 00:07:51.025 Transport SGL Data Block: Not Supported 00:07:51.025 Replay Protected Memory Block: Not Supported 00:07:51.025 00:07:51.025 Firmware Slot Information 00:07:51.025 ========================= 00:07:51.025 Active slot: 1 00:07:51.025 Slot 1 Firmware Revision: 1.0 00:07:51.025 00:07:51.025 00:07:51.025 Commands Supported and Effects 00:07:51.025 ============================== 00:07:51.025 Admin Commands 00:07:51.025 -------------- 00:07:51.025 Delete I/O Submission Queue (00h): Supported 00:07:51.025 Create I/O Submission Queue (01h): Supported 00:07:51.025 Get Log Page (02h): Supported 00:07:51.025 Delete I/O Completion Queue (04h): Supported 00:07:51.025 Create I/O Completion Queue (05h): Supported 00:07:51.025 Identify (06h): Supported 00:07:51.025 Abort (08h): Supported 
00:07:51.025 Set Features (09h): Supported 00:07:51.025 Get Features (0Ah): Supported 00:07:51.025 Asynchronous Event Request (0Ch): Supported 00:07:51.025 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.025 Directive Send (19h): Supported 00:07:51.025 Directive Receive (1Ah): Supported 00:07:51.025 Virtualization Management (1Ch): Supported 00:07:51.025 Doorbell Buffer Config (7Ch): Supported 00:07:51.025 Format NVM (80h): Supported LBA-Change 00:07:51.025 I/O Commands 00:07:51.025 ------------ 00:07:51.025 Flush (00h): Supported LBA-Change 00:07:51.025 Write (01h): Supported LBA-Change 00:07:51.025 Read (02h): Supported 00:07:51.025 Compare (05h): Supported 00:07:51.025 Write Zeroes (08h): Supported LBA-Change 00:07:51.025 Dataset Management (09h): Supported LBA-Change 00:07:51.025 Unknown (0Ch): Supported 00:07:51.025 Unknown (12h): Supported 00:07:51.025 Copy (19h): Supported LBA-Change 00:07:51.025 Unknown (1Dh): Supported LBA-Change 00:07:51.025 00:07:51.025 Error Log 00:07:51.025 ========= 00:07:51.025 00:07:51.025 Arbitration 00:07:51.025 =========== 00:07:51.025 Arbitration Burst: no limit 00:07:51.025 00:07:51.025 Power Management 00:07:51.025 ================ 00:07:51.025 Number of Power States: 1 00:07:51.025 Current Power State: Power State #0 00:07:51.025 Power State #0: 00:07:51.025 Max Power: 25.00 W 00:07:51.025 Non-Operational State: Operational 00:07:51.025 Entry Latency: 16 microseconds 00:07:51.025 Exit Latency: 4 microseconds 00:07:51.025 Relative Read Throughput: 0 00:07:51.025 Relative Read Latency: 0 00:07:51.025 Relative Write Throughput: 0 00:07:51.025 Relative Write Latency: 0 00:07:51.025 Idle Power: Not Reported 00:07:51.025 Active Power: Not Reported 00:07:51.025 Non-Operational Permissive Mode: Not Supported 00:07:51.025 00:07:51.025 Health Information 00:07:51.025 ================== 00:07:51.025 Critical Warnings: 00:07:51.025 Available Spare Space: OK 00:07:51.025 Temperature: OK 00:07:51.025 Device Reliability: OK 00:07:51.025 Read Only: No 00:07:51.025 Volatile Memory Backup: OK 00:07:51.025 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.025 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.025 Available Spare: 0% 00:07:51.025 Available Spare Threshold: 0% 00:07:51.025 Life Percentage Used: 0% 00:07:51.025 Data Units Read: 686 00:07:51.025 Data Units Written: 615 00:07:51.025 Host Read Commands: 37482 00:07:51.025 Host Write Commands: 37268 00:07:51.025 Controller Busy Time: 0 minutes 00:07:51.025 Power Cycles: 0 00:07:51.025 Power On Hours: 0 hours 00:07:51.025 Unsafe Shutdowns: 0 00:07:51.025 Unrecoverable Media Errors: 0 00:07:51.025 Lifetime Error Log Entries: 0 00:07:51.025 Warning Temperature Time: 0 minutes 00:07:51.025 Critical Temperature Time: 0 minutes 00:07:51.025 00:07:51.025 Number of Queues 00:07:51.025 ================ 00:07:51.025 Number of I/O Submission Queues: 64 00:07:51.025 Number of I/O Completion Queues: 64 00:07:51.025 00:07:51.025 ZNS Specific Controller Data 00:07:51.025 ============================ 00:07:51.025 Zone Append Size Limit: 0 00:07:51.025 00:07:51.025 00:07:51.025 Active Namespaces 00:07:51.025 ================= 00:07:51.025 Namespace ID:1 00:07:51.025 Error Recovery Timeout: Unlimited 00:07:51.025 Command Set Identifier: NVM (00h) 00:07:51.025 Deallocate: Supported 00:07:51.025 Deallocated/Unwritten Error: Supported 00:07:51.025 Deallocated Read Value: All 0x00 00:07:51.025 Deallocate in Write Zeroes: Not Supported 00:07:51.025 Deallocated Guard Field: 0xFFFF 00:07:51.025 Flush: 
Supported 00:07:51.025 Reservation: Not Supported 00:07:51.025 Metadata Transferred as: Separate Metadata Buffer 00:07:51.025 Namespace Sharing Capabilities: Private 00:07:51.025 Size (in LBAs): 1548666 (5GiB) 00:07:51.025 Capacity (in LBAs): 1548666 (5GiB) 00:07:51.025 Utilization (in LBAs): 1548666 (5GiB) 00:07:51.025 Thin Provisioning: Not Supported 00:07:51.025 Per-NS Atomic Units: No 00:07:51.025 Maximum Single Source Range Length: 128 00:07:51.025 Maximum Copy Length: 128 00:07:51.025 Maximum Source Range Count: 128 00:07:51.025 NGUID/EUI64 Never Reused: No 00:07:51.025 Namespace Write Protected: No 00:07:51.025 Number of LBA Formats: 8 00:07:51.025 Current LBA Format: LBA Format #07 00:07:51.025 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.025 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.025 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.025 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.025 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.025 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.025 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.025 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.025 00:07:51.025 NVM Specific Namespace Data 00:07:51.025 =========================== 00:07:51.025 Logical Block Storage Tag Mask: 0 00:07:51.025 Protection Information Capabilities: 00:07:51.025 16b Guard Protection Information Storage Tag Support: No 00:07:51.025 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.025 Storage Tag Check Read Support: No 00:07:51.025 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.025 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.025 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.025 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.025 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.025 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.025 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.025 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.025 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.025 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:51.285 ===================================================== 00:07:51.285 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.285 ===================================================== 00:07:51.285 Controller Capabilities/Features 00:07:51.285 ================================ 00:07:51.285 Vendor ID: 1b36 00:07:51.285 Subsystem Vendor ID: 1af4 00:07:51.285 Serial Number: 12341 00:07:51.285 Model Number: QEMU NVMe Ctrl 00:07:51.285 Firmware Version: 8.0.0 00:07:51.285 Recommended Arb Burst: 6 00:07:51.285 IEEE OUI Identifier: 00 54 52 00:07:51.285 Multi-path I/O 00:07:51.285 May have multiple subsystem ports: No 00:07:51.285 May have multiple controllers: No 00:07:51.285 Associated with SR-IOV VF: No 00:07:51.285 Max Data Transfer Size: 524288 00:07:51.285 Max Number of Namespaces: 256 00:07:51.285 Max Number of I/O Queues: 64 00:07:51.285 NVMe 
Specification Version (VS): 1.4 00:07:51.285 NVMe Specification Version (Identify): 1.4 00:07:51.285 Maximum Queue Entries: 2048 00:07:51.285 Contiguous Queues Required: Yes 00:07:51.285 Arbitration Mechanisms Supported 00:07:51.285 Weighted Round Robin: Not Supported 00:07:51.285 Vendor Specific: Not Supported 00:07:51.285 Reset Timeout: 7500 ms 00:07:51.285 Doorbell Stride: 4 bytes 00:07:51.285 NVM Subsystem Reset: Not Supported 00:07:51.285 Command Sets Supported 00:07:51.285 NVM Command Set: Supported 00:07:51.285 Boot Partition: Not Supported 00:07:51.285 Memory Page Size Minimum: 4096 bytes 00:07:51.285 Memory Page Size Maximum: 65536 bytes 00:07:51.285 Persistent Memory Region: Not Supported 00:07:51.285 Optional Asynchronous Events Supported 00:07:51.285 Namespace Attribute Notices: Supported 00:07:51.285 Firmware Activation Notices: Not Supported 00:07:51.285 ANA Change Notices: Not Supported 00:07:51.285 PLE Aggregate Log Change Notices: Not Supported 00:07:51.285 LBA Status Info Alert Notices: Not Supported 00:07:51.285 EGE Aggregate Log Change Notices: Not Supported 00:07:51.285 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.285 Zone Descriptor Change Notices: Not Supported 00:07:51.285 Discovery Log Change Notices: Not Supported 00:07:51.285 Controller Attributes 00:07:51.285 128-bit Host Identifier: Not Supported 00:07:51.285 Non-Operational Permissive Mode: Not Supported 00:07:51.285 NVM Sets: Not Supported 00:07:51.285 Read Recovery Levels: Not Supported 00:07:51.285 Endurance Groups: Not Supported 00:07:51.285 Predictable Latency Mode: Not Supported 00:07:51.285 Traffic Based Keep ALive: Not Supported 00:07:51.285 Namespace Granularity: Not Supported 00:07:51.285 SQ Associations: Not Supported 00:07:51.285 UUID List: Not Supported 00:07:51.285 Multi-Domain Subsystem: Not Supported 00:07:51.285 Fixed Capacity Management: Not Supported 00:07:51.285 Variable Capacity Management: Not Supported 00:07:51.285 Delete Endurance Group: Not Supported 00:07:51.285 Delete NVM Set: Not Supported 00:07:51.285 Extended LBA Formats Supported: Supported 00:07:51.285 Flexible Data Placement Supported: Not Supported 00:07:51.285 00:07:51.285 Controller Memory Buffer Support 00:07:51.285 ================================ 00:07:51.286 Supported: No 00:07:51.286 00:07:51.286 Persistent Memory Region Support 00:07:51.286 ================================ 00:07:51.286 Supported: No 00:07:51.286 00:07:51.286 Admin Command Set Attributes 00:07:51.286 ============================ 00:07:51.286 Security Send/Receive: Not Supported 00:07:51.286 Format NVM: Supported 00:07:51.286 Firmware Activate/Download: Not Supported 00:07:51.286 Namespace Management: Supported 00:07:51.286 Device Self-Test: Not Supported 00:07:51.286 Directives: Supported 00:07:51.286 NVMe-MI: Not Supported 00:07:51.286 Virtualization Management: Not Supported 00:07:51.286 Doorbell Buffer Config: Supported 00:07:51.286 Get LBA Status Capability: Not Supported 00:07:51.286 Command & Feature Lockdown Capability: Not Supported 00:07:51.286 Abort Command Limit: 4 00:07:51.286 Async Event Request Limit: 4 00:07:51.286 Number of Firmware Slots: N/A 00:07:51.286 Firmware Slot 1 Read-Only: N/A 00:07:51.286 Firmware Activation Without Reset: N/A 00:07:51.286 Multiple Update Detection Support: N/A 00:07:51.286 Firmware Update Granularity: No Information Provided 00:07:51.286 Per-Namespace SMART Log: Yes 00:07:51.286 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.286 Subsystem NQN: nqn.2019-08.org.qemu:12341 
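The Max Data Transfer Size of 524288 bytes reported for each controller follows from the NVMe MDTS field, which expresses the limit as a power of two in units of the minimum memory page size. A quick cross-check, assuming MDTS is 7 here (inferred, not printed by the tool):

#!/usr/bin/env bash
# Max Data Transfer Size = 2^MDTS * min_page_size (NVMe spec).
# With the 4096-byte minimum page from the dump, 524288 implies MDTS = 7.
min_page=4096
mdts=7
echo $(( (1 << mdts) * min_page ))   # 524288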
00:07:51.286 Command Effects Log Page: Supported 00:07:51.286 Get Log Page Extended Data: Supported 00:07:51.286 Telemetry Log Pages: Not Supported 00:07:51.286 Persistent Event Log Pages: Not Supported 00:07:51.286 Supported Log Pages Log Page: May Support 00:07:51.286 Commands Supported & Effects Log Page: Not Supported 00:07:51.286 Feature Identifiers & Effects Log Page:May Support 00:07:51.286 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.286 Data Area 4 for Telemetry Log: Not Supported 00:07:51.286 Error Log Page Entries Supported: 1 00:07:51.286 Keep Alive: Not Supported 00:07:51.286 00:07:51.286 NVM Command Set Attributes 00:07:51.286 ========================== 00:07:51.286 Submission Queue Entry Size 00:07:51.286 Max: 64 00:07:51.286 Min: 64 00:07:51.286 Completion Queue Entry Size 00:07:51.286 Max: 16 00:07:51.286 Min: 16 00:07:51.286 Number of Namespaces: 256 00:07:51.286 Compare Command: Supported 00:07:51.286 Write Uncorrectable Command: Not Supported 00:07:51.286 Dataset Management Command: Supported 00:07:51.286 Write Zeroes Command: Supported 00:07:51.286 Set Features Save Field: Supported 00:07:51.286 Reservations: Not Supported 00:07:51.286 Timestamp: Supported 00:07:51.286 Copy: Supported 00:07:51.286 Volatile Write Cache: Present 00:07:51.286 Atomic Write Unit (Normal): 1 00:07:51.286 Atomic Write Unit (PFail): 1 00:07:51.286 Atomic Compare & Write Unit: 1 00:07:51.286 Fused Compare & Write: Not Supported 00:07:51.286 Scatter-Gather List 00:07:51.286 SGL Command Set: Supported 00:07:51.286 SGL Keyed: Not Supported 00:07:51.286 SGL Bit Bucket Descriptor: Not Supported 00:07:51.286 SGL Metadata Pointer: Not Supported 00:07:51.286 Oversized SGL: Not Supported 00:07:51.286 SGL Metadata Address: Not Supported 00:07:51.286 SGL Offset: Not Supported 00:07:51.286 Transport SGL Data Block: Not Supported 00:07:51.286 Replay Protected Memory Block: Not Supported 00:07:51.286 00:07:51.286 Firmware Slot Information 00:07:51.286 ========================= 00:07:51.286 Active slot: 1 00:07:51.286 Slot 1 Firmware Revision: 1.0 00:07:51.286 00:07:51.286 00:07:51.286 Commands Supported and Effects 00:07:51.286 ============================== 00:07:51.286 Admin Commands 00:07:51.286 -------------- 00:07:51.286 Delete I/O Submission Queue (00h): Supported 00:07:51.286 Create I/O Submission Queue (01h): Supported 00:07:51.286 Get Log Page (02h): Supported 00:07:51.286 Delete I/O Completion Queue (04h): Supported 00:07:51.286 Create I/O Completion Queue (05h): Supported 00:07:51.286 Identify (06h): Supported 00:07:51.286 Abort (08h): Supported 00:07:51.286 Set Features (09h): Supported 00:07:51.286 Get Features (0Ah): Supported 00:07:51.286 Asynchronous Event Request (0Ch): Supported 00:07:51.286 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.286 Directive Send (19h): Supported 00:07:51.286 Directive Receive (1Ah): Supported 00:07:51.286 Virtualization Management (1Ch): Supported 00:07:51.286 Doorbell Buffer Config (7Ch): Supported 00:07:51.286 Format NVM (80h): Supported LBA-Change 00:07:51.286 I/O Commands 00:07:51.286 ------------ 00:07:51.286 Flush (00h): Supported LBA-Change 00:07:51.286 Write (01h): Supported LBA-Change 00:07:51.286 Read (02h): Supported 00:07:51.286 Compare (05h): Supported 00:07:51.286 Write Zeroes (08h): Supported LBA-Change 00:07:51.286 Dataset Management (09h): Supported LBA-Change 00:07:51.286 Unknown (0Ch): Supported 00:07:51.286 Unknown (12h): Supported 00:07:51.286 Copy (19h): Supported LBA-Change 00:07:51.286 Unknown (1Dh): 
Supported LBA-Change 00:07:51.286 00:07:51.286 Error Log 00:07:51.286 ========= 00:07:51.286 00:07:51.286 Arbitration 00:07:51.286 =========== 00:07:51.286 Arbitration Burst: no limit 00:07:51.286 00:07:51.286 Power Management 00:07:51.286 ================ 00:07:51.286 Number of Power States: 1 00:07:51.286 Current Power State: Power State #0 00:07:51.286 Power State #0: 00:07:51.286 Max Power: 25.00 W 00:07:51.286 Non-Operational State: Operational 00:07:51.286 Entry Latency: 16 microseconds 00:07:51.286 Exit Latency: 4 microseconds 00:07:51.286 Relative Read Throughput: 0 00:07:51.286 Relative Read Latency: 0 00:07:51.286 Relative Write Throughput: 0 00:07:51.286 Relative Write Latency: 0 00:07:51.286 Idle Power: Not Reported 00:07:51.286 Active Power: Not Reported 00:07:51.286 Non-Operational Permissive Mode: Not Supported 00:07:51.286 00:07:51.286 Health Information 00:07:51.286 ================== 00:07:51.286 Critical Warnings: 00:07:51.286 Available Spare Space: OK 00:07:51.286 Temperature: OK 00:07:51.286 Device Reliability: OK 00:07:51.286 Read Only: No 00:07:51.286 Volatile Memory Backup: OK 00:07:51.286 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.286 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.286 Available Spare: 0% 00:07:51.286 Available Spare Threshold: 0% 00:07:51.286 Life Percentage Used: 0% 00:07:51.286 Data Units Read: 1080 00:07:51.286 Data Units Written: 946 00:07:51.286 Host Read Commands: 55353 00:07:51.286 Host Write Commands: 54140 00:07:51.286 Controller Busy Time: 0 minutes 00:07:51.286 Power Cycles: 0 00:07:51.286 Power On Hours: 0 hours 00:07:51.286 Unsafe Shutdowns: 0 00:07:51.286 Unrecoverable Media Errors: 0 00:07:51.286 Lifetime Error Log Entries: 0 00:07:51.286 Warning Temperature Time: 0 minutes 00:07:51.286 Critical Temperature Time: 0 minutes 00:07:51.286 00:07:51.286 Number of Queues 00:07:51.286 ================ 00:07:51.286 Number of I/O Submission Queues: 64 00:07:51.286 Number of I/O Completion Queues: 64 00:07:51.286 00:07:51.286 ZNS Specific Controller Data 00:07:51.286 ============================ 00:07:51.286 Zone Append Size Limit: 0 00:07:51.286 00:07:51.286 00:07:51.286 Active Namespaces 00:07:51.286 ================= 00:07:51.286 Namespace ID:1 00:07:51.286 Error Recovery Timeout: Unlimited 00:07:51.286 Command Set Identifier: NVM (00h) 00:07:51.286 Deallocate: Supported 00:07:51.286 Deallocated/Unwritten Error: Supported 00:07:51.286 Deallocated Read Value: All 0x00 00:07:51.286 Deallocate in Write Zeroes: Not Supported 00:07:51.286 Deallocated Guard Field: 0xFFFF 00:07:51.286 Flush: Supported 00:07:51.286 Reservation: Not Supported 00:07:51.286 Namespace Sharing Capabilities: Private 00:07:51.286 Size (in LBAs): 1310720 (5GiB) 00:07:51.286 Capacity (in LBAs): 1310720 (5GiB) 00:07:51.287 Utilization (in LBAs): 1310720 (5GiB) 00:07:51.287 Thin Provisioning: Not Supported 00:07:51.287 Per-NS Atomic Units: No 00:07:51.287 Maximum Single Source Range Length: 128 00:07:51.287 Maximum Copy Length: 128 00:07:51.287 Maximum Source Range Count: 128 00:07:51.287 NGUID/EUI64 Never Reused: No 00:07:51.287 Namespace Write Protected: No 00:07:51.287 Number of LBA Formats: 8 00:07:51.287 Current LBA Format: LBA Format #04 00:07:51.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.287 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.287 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.287 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.287 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:51.287 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.287 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.287 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.287 00:07:51.287 NVM Specific Namespace Data 00:07:51.287 =========================== 00:07:51.287 Logical Block Storage Tag Mask: 0 00:07:51.287 Protection Information Capabilities: 00:07:51.287 16b Guard Protection Information Storage Tag Support: No 00:07:51.287 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.287 Storage Tag Check Read Support: No 00:07:51.287 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.287 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.287 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.287 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.287 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.287 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.287 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:51.287 ===================================================== 00:07:51.287 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.287 ===================================================== 00:07:51.287 Controller Capabilities/Features 00:07:51.287 ================================ 00:07:51.287 Vendor ID: 1b36 00:07:51.287 Subsystem Vendor ID: 1af4 00:07:51.287 Serial Number: 12342 00:07:51.287 Model Number: QEMU NVMe Ctrl 00:07:51.287 Firmware Version: 8.0.0 00:07:51.287 Recommended Arb Burst: 6 00:07:51.287 IEEE OUI Identifier: 00 54 52 00:07:51.287 Multi-path I/O 00:07:51.287 May have multiple subsystem ports: No 00:07:51.287 May have multiple controllers: No 00:07:51.287 Associated with SR-IOV VF: No 00:07:51.287 Max Data Transfer Size: 524288 00:07:51.287 Max Number of Namespaces: 256 00:07:51.287 Max Number of I/O Queues: 64 00:07:51.287 NVMe Specification Version (VS): 1.4 00:07:51.287 NVMe Specification Version (Identify): 1.4 00:07:51.287 Maximum Queue Entries: 2048 00:07:51.287 Contiguous Queues Required: Yes 00:07:51.287 Arbitration Mechanisms Supported 00:07:51.287 Weighted Round Robin: Not Supported 00:07:51.287 Vendor Specific: Not Supported 00:07:51.287 Reset Timeout: 7500 ms 00:07:51.287 Doorbell Stride: 4 bytes 00:07:51.287 NVM Subsystem Reset: Not Supported 00:07:51.287 Command Sets Supported 00:07:51.287 NVM Command Set: Supported 00:07:51.287 Boot Partition: Not Supported 00:07:51.287 Memory Page Size Minimum: 4096 bytes 00:07:51.287 Memory Page Size Maximum: 65536 bytes 00:07:51.287 Persistent Memory Region: Not Supported 00:07:51.287 Optional Asynchronous Events Supported 00:07:51.287 Namespace Attribute Notices: Supported 00:07:51.287 Firmware Activation Notices: Not Supported 00:07:51.287 ANA Change Notices: Not Supported 00:07:51.287 PLE Aggregate Log Change Notices: Not Supported 00:07:51.287 LBA Status Info Alert Notices: 
Not Supported 00:07:51.287 EGE Aggregate Log Change Notices: Not Supported 00:07:51.287 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.287 Zone Descriptor Change Notices: Not Supported 00:07:51.287 Discovery Log Change Notices: Not Supported 00:07:51.287 Controller Attributes 00:07:51.287 128-bit Host Identifier: Not Supported 00:07:51.287 Non-Operational Permissive Mode: Not Supported 00:07:51.287 NVM Sets: Not Supported 00:07:51.287 Read Recovery Levels: Not Supported 00:07:51.287 Endurance Groups: Not Supported 00:07:51.287 Predictable Latency Mode: Not Supported 00:07:51.287 Traffic Based Keep ALive: Not Supported 00:07:51.287 Namespace Granularity: Not Supported 00:07:51.287 SQ Associations: Not Supported 00:07:51.287 UUID List: Not Supported 00:07:51.287 Multi-Domain Subsystem: Not Supported 00:07:51.287 Fixed Capacity Management: Not Supported 00:07:51.287 Variable Capacity Management: Not Supported 00:07:51.287 Delete Endurance Group: Not Supported 00:07:51.287 Delete NVM Set: Not Supported 00:07:51.287 Extended LBA Formats Supported: Supported 00:07:51.287 Flexible Data Placement Supported: Not Supported 00:07:51.287 00:07:51.287 Controller Memory Buffer Support 00:07:51.287 ================================ 00:07:51.287 Supported: No 00:07:51.287 00:07:51.287 Persistent Memory Region Support 00:07:51.287 ================================ 00:07:51.287 Supported: No 00:07:51.287 00:07:51.287 Admin Command Set Attributes 00:07:51.287 ============================ 00:07:51.287 Security Send/Receive: Not Supported 00:07:51.287 Format NVM: Supported 00:07:51.287 Firmware Activate/Download: Not Supported 00:07:51.287 Namespace Management: Supported 00:07:51.287 Device Self-Test: Not Supported 00:07:51.287 Directives: Supported 00:07:51.287 NVMe-MI: Not Supported 00:07:51.287 Virtualization Management: Not Supported 00:07:51.287 Doorbell Buffer Config: Supported 00:07:51.287 Get LBA Status Capability: Not Supported 00:07:51.287 Command & Feature Lockdown Capability: Not Supported 00:07:51.287 Abort Command Limit: 4 00:07:51.287 Async Event Request Limit: 4 00:07:51.287 Number of Firmware Slots: N/A 00:07:51.287 Firmware Slot 1 Read-Only: N/A 00:07:51.287 Firmware Activation Without Reset: N/A 00:07:51.287 Multiple Update Detection Support: N/A 00:07:51.287 Firmware Update Granularity: No Information Provided 00:07:51.287 Per-Namespace SMART Log: Yes 00:07:51.287 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.287 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:51.287 Command Effects Log Page: Supported 00:07:51.287 Get Log Page Extended Data: Supported 00:07:51.287 Telemetry Log Pages: Not Supported 00:07:51.287 Persistent Event Log Pages: Not Supported 00:07:51.287 Supported Log Pages Log Page: May Support 00:07:51.287 Commands Supported & Effects Log Page: Not Supported 00:07:51.287 Feature Identifiers & Effects Log Page:May Support 00:07:51.287 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.287 Data Area 4 for Telemetry Log: Not Supported 00:07:51.287 Error Log Page Entries Supported: 1 00:07:51.287 Keep Alive: Not Supported 00:07:51.287 00:07:51.287 NVM Command Set Attributes 00:07:51.287 ========================== 00:07:51.287 Submission Queue Entry Size 00:07:51.287 Max: 64 00:07:51.287 Min: 64 00:07:51.287 Completion Queue Entry Size 00:07:51.287 Max: 16 00:07:51.287 Min: 16 00:07:51.287 Number of Namespaces: 256 00:07:51.287 Compare Command: Supported 00:07:51.287 Write Uncorrectable Command: Not Supported 00:07:51.287 Dataset Management Command: 
Supported 00:07:51.287 Write Zeroes Command: Supported 00:07:51.287 Set Features Save Field: Supported 00:07:51.287 Reservations: Not Supported 00:07:51.287 Timestamp: Supported 00:07:51.287 Copy: Supported 00:07:51.287 Volatile Write Cache: Present 00:07:51.287 Atomic Write Unit (Normal): 1 00:07:51.287 Atomic Write Unit (PFail): 1 00:07:51.288 Atomic Compare & Write Unit: 1 00:07:51.288 Fused Compare & Write: Not Supported 00:07:51.288 Scatter-Gather List 00:07:51.288 SGL Command Set: Supported 00:07:51.288 SGL Keyed: Not Supported 00:07:51.288 SGL Bit Bucket Descriptor: Not Supported 00:07:51.288 SGL Metadata Pointer: Not Supported 00:07:51.288 Oversized SGL: Not Supported 00:07:51.288 SGL Metadata Address: Not Supported 00:07:51.288 SGL Offset: Not Supported 00:07:51.288 Transport SGL Data Block: Not Supported 00:07:51.288 Replay Protected Memory Block: Not Supported 00:07:51.288 00:07:51.288 Firmware Slot Information 00:07:51.288 ========================= 00:07:51.288 Active slot: 1 00:07:51.288 Slot 1 Firmware Revision: 1.0 00:07:51.288 00:07:51.288 00:07:51.288 Commands Supported and Effects 00:07:51.288 ============================== 00:07:51.288 Admin Commands 00:07:51.288 -------------- 00:07:51.288 Delete I/O Submission Queue (00h): Supported 00:07:51.288 Create I/O Submission Queue (01h): Supported 00:07:51.288 Get Log Page (02h): Supported 00:07:51.288 Delete I/O Completion Queue (04h): Supported 00:07:51.288 Create I/O Completion Queue (05h): Supported 00:07:51.288 Identify (06h): Supported 00:07:51.288 Abort (08h): Supported 00:07:51.288 Set Features (09h): Supported 00:07:51.288 Get Features (0Ah): Supported 00:07:51.288 Asynchronous Event Request (0Ch): Supported 00:07:51.288 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.288 Directive Send (19h): Supported 00:07:51.288 Directive Receive (1Ah): Supported 00:07:51.288 Virtualization Management (1Ch): Supported 00:07:51.288 Doorbell Buffer Config (7Ch): Supported 00:07:51.288 Format NVM (80h): Supported LBA-Change 00:07:51.288 I/O Commands 00:07:51.288 ------------ 00:07:51.288 Flush (00h): Supported LBA-Change 00:07:51.288 Write (01h): Supported LBA-Change 00:07:51.288 Read (02h): Supported 00:07:51.288 Compare (05h): Supported 00:07:51.288 Write Zeroes (08h): Supported LBA-Change 00:07:51.288 Dataset Management (09h): Supported LBA-Change 00:07:51.288 Unknown (0Ch): Supported 00:07:51.288 Unknown (12h): Supported 00:07:51.288 Copy (19h): Supported LBA-Change 00:07:51.288 Unknown (1Dh): Supported LBA-Change 00:07:51.288 00:07:51.288 Error Log 00:07:51.288 ========= 00:07:51.288 00:07:51.288 Arbitration 00:07:51.288 =========== 00:07:51.288 Arbitration Burst: no limit 00:07:51.288 00:07:51.288 Power Management 00:07:51.288 ================ 00:07:51.288 Number of Power States: 1 00:07:51.288 Current Power State: Power State #0 00:07:51.288 Power State #0: 00:07:51.288 Max Power: 25.00 W 00:07:51.288 Non-Operational State: Operational 00:07:51.288 Entry Latency: 16 microseconds 00:07:51.288 Exit Latency: 4 microseconds 00:07:51.288 Relative Read Throughput: 0 00:07:51.288 Relative Read Latency: 0 00:07:51.288 Relative Write Throughput: 0 00:07:51.288 Relative Write Latency: 0 00:07:51.288 Idle Power: Not Reported 00:07:51.288 Active Power: Not Reported 00:07:51.288 Non-Operational Permissive Mode: Not Supported 00:07:51.288 00:07:51.288 Health Information 00:07:51.288 ================== 00:07:51.288 Critical Warnings: 00:07:51.288 Available Spare Space: OK 00:07:51.288 Temperature: OK 00:07:51.288 Device 
Reliability: OK 00:07:51.288 Read Only: No 00:07:51.288 Volatile Memory Backup: OK 00:07:51.288 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.288 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.288 Available Spare: 0% 00:07:51.288 Available Spare Threshold: 0% 00:07:51.288 Life Percentage Used: 0% 00:07:51.288 Data Units Read: 2257 00:07:51.288 Data Units Written: 2045 00:07:51.288 Host Read Commands: 115011 00:07:51.288 Host Write Commands: 113280 00:07:51.288 Controller Busy Time: 0 minutes 00:07:51.288 Power Cycles: 0 00:07:51.288 Power On Hours: 0 hours 00:07:51.288 Unsafe Shutdowns: 0 00:07:51.288 Unrecoverable Media Errors: 0 00:07:51.288 Lifetime Error Log Entries: 0 00:07:51.288 Warning Temperature Time: 0 minutes 00:07:51.288 Critical Temperature Time: 0 minutes 00:07:51.288 00:07:51.288 Number of Queues 00:07:51.288 ================ 00:07:51.288 Number of I/O Submission Queues: 64 00:07:51.288 Number of I/O Completion Queues: 64 00:07:51.288 00:07:51.288 ZNS Specific Controller Data 00:07:51.288 ============================ 00:07:51.288 Zone Append Size Limit: 0 00:07:51.288 00:07:51.288 00:07:51.288 Active Namespaces 00:07:51.288 ================= 00:07:51.288 Namespace ID:1 00:07:51.288 Error Recovery Timeout: Unlimited 00:07:51.288 Command Set Identifier: NVM (00h) 00:07:51.288 Deallocate: Supported 00:07:51.288 Deallocated/Unwritten Error: Supported 00:07:51.288 Deallocated Read Value: All 0x00 00:07:51.288 Deallocate in Write Zeroes: Not Supported 00:07:51.288 Deallocated Guard Field: 0xFFFF 00:07:51.288 Flush: Supported 00:07:51.288 Reservation: Not Supported 00:07:51.288 Namespace Sharing Capabilities: Private 00:07:51.288 Size (in LBAs): 1048576 (4GiB) 00:07:51.288 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.288 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.288 Thin Provisioning: Not Supported 00:07:51.288 Per-NS Atomic Units: No 00:07:51.288 Maximum Single Source Range Length: 128 00:07:51.288 Maximum Copy Length: 128 00:07:51.288 Maximum Source Range Count: 128 00:07:51.288 NGUID/EUI64 Never Reused: No 00:07:51.288 Namespace Write Protected: No 00:07:51.288 Number of LBA Formats: 8 00:07:51.288 Current LBA Format: LBA Format #04 00:07:51.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.288 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.288 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.288 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.288 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.288 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.288 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.288 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.288 00:07:51.288 NVM Specific Namespace Data 00:07:51.288 =========================== 00:07:51.288 Logical Block Storage Tag Mask: 0 00:07:51.288 Protection Information Capabilities: 00:07:51.288 16b Guard Protection Information Storage Tag Support: No 00:07:51.288 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.288 Storage Tag Check Read Support: No 00:07:51.288 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.288 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.288 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.288 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.288 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.288 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.288 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.288 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.288 Namespace ID:2 00:07:51.288 Error Recovery Timeout: Unlimited 00:07:51.288 Command Set Identifier: NVM (00h) 00:07:51.288 Deallocate: Supported 00:07:51.288 Deallocated/Unwritten Error: Supported 00:07:51.288 Deallocated Read Value: All 0x00 00:07:51.288 Deallocate in Write Zeroes: Not Supported 00:07:51.288 Deallocated Guard Field: 0xFFFF 00:07:51.288 Flush: Supported 00:07:51.288 Reservation: Not Supported 00:07:51.288 Namespace Sharing Capabilities: Private 00:07:51.288 Size (in LBAs): 1048576 (4GiB) 00:07:51.288 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.288 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.288 Thin Provisioning: Not Supported 00:07:51.288 Per-NS Atomic Units: No 00:07:51.288 Maximum Single Source Range Length: 128 00:07:51.288 Maximum Copy Length: 128 00:07:51.289 Maximum Source Range Count: 128 00:07:51.289 NGUID/EUI64 Never Reused: No 00:07:51.289 Namespace Write Protected: No 00:07:51.289 Number of LBA Formats: 8 00:07:51.289 Current LBA Format: LBA Format #04 00:07:51.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.289 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.289 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.289 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.289 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.289 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.289 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.289 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.289 00:07:51.289 NVM Specific Namespace Data 00:07:51.289 =========================== 00:07:51.289 Logical Block Storage Tag Mask: 0 00:07:51.289 Protection Information Capabilities: 00:07:51.289 16b Guard Protection Information Storage Tag Support: No 00:07:51.289 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.289 Storage Tag Check Read Support: No 00:07:51.289 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.289 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.289 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.289 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.289 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.289 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.289 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.289 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.289 Namespace ID:3 00:07:51.289 Error Recovery Timeout: Unlimited 00:07:51.289 Command Set Identifier: NVM (00h) 00:07:51.289 Deallocate: Supported 00:07:51.289 Deallocated/Unwritten Error: Supported 00:07:51.289 Deallocated Read Value: All 0x00 00:07:51.289 Deallocate in Write Zeroes: Not Supported 00:07:51.289 Deallocated Guard Field: 0xFFFF 00:07:51.289 Flush: Supported 00:07:51.289 Reservation: Not Supported 00:07:51.289 
Namespace Sharing Capabilities: Private 00:07:51.289 Size (in LBAs): 1048576 (4GiB) 00:07:51.289 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.289 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.289 Thin Provisioning: Not Supported 00:07:51.289 Per-NS Atomic Units: No 00:07:51.289 Maximum Single Source Range Length: 128 00:07:51.289 Maximum Copy Length: 128 00:07:51.289 Maximum Source Range Count: 128 00:07:51.289 NGUID/EUI64 Never Reused: No 00:07:51.289 Namespace Write Protected: No 00:07:51.289 Number of LBA Formats: 8 00:07:51.289 Current LBA Format: LBA Format #04 00:07:51.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.289 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.289 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.289 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.289 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.289 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.289 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.289 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.289 00:07:51.289 NVM Specific Namespace Data 00:07:51.289 =========================== 00:07:51.289 Logical Block Storage Tag Mask: 0 00:07:51.289 Protection Information Capabilities: 00:07:51.289 16b Guard Protection Information Storage Tag Support: No 00:07:51.289 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.547 Storage Tag Check Read Support: No 00:07:51.547 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.547 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.547 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.547 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.547 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.547 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.547 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.547 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.547 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.547 06:00:16 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:51.547 ===================================================== 00:07:51.547 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.547 ===================================================== 00:07:51.547 Controller Capabilities/Features 00:07:51.547 ================================ 00:07:51.547 Vendor ID: 1b36 00:07:51.547 Subsystem Vendor ID: 1af4 00:07:51.547 Serial Number: 12343 00:07:51.547 Model Number: QEMU NVMe Ctrl 00:07:51.547 Firmware Version: 8.0.0 00:07:51.547 Recommended Arb Burst: 6 00:07:51.547 IEEE OUI Identifier: 00 54 52 00:07:51.547 Multi-path I/O 00:07:51.547 May have multiple subsystem ports: No 00:07:51.547 May have multiple controllers: Yes 00:07:51.547 Associated with SR-IOV VF: No 00:07:51.547 Max Data Transfer Size: 524288 00:07:51.547 Max Number of Namespaces: 256 00:07:51.547 Max Number of I/O Queues: 64 00:07:51.547 NVMe Specification Version (VS): 1.4 00:07:51.547 NVMe Specification Version (Identify): 1.4 00:07:51.547 Maximum Queue Entries: 2048 
00:07:51.547 Contiguous Queues Required: Yes 00:07:51.547 Arbitration Mechanisms Supported 00:07:51.547 Weighted Round Robin: Not Supported 00:07:51.547 Vendor Specific: Not Supported 00:07:51.547 Reset Timeout: 7500 ms 00:07:51.547 Doorbell Stride: 4 bytes 00:07:51.547 NVM Subsystem Reset: Not Supported 00:07:51.547 Command Sets Supported 00:07:51.547 NVM Command Set: Supported 00:07:51.547 Boot Partition: Not Supported 00:07:51.547 Memory Page Size Minimum: 4096 bytes 00:07:51.547 Memory Page Size Maximum: 65536 bytes 00:07:51.547 Persistent Memory Region: Not Supported 00:07:51.547 Optional Asynchronous Events Supported 00:07:51.547 Namespace Attribute Notices: Supported 00:07:51.547 Firmware Activation Notices: Not Supported 00:07:51.547 ANA Change Notices: Not Supported 00:07:51.547 PLE Aggregate Log Change Notices: Not Supported 00:07:51.547 LBA Status Info Alert Notices: Not Supported 00:07:51.547 EGE Aggregate Log Change Notices: Not Supported 00:07:51.547 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.547 Zone Descriptor Change Notices: Not Supported 00:07:51.547 Discovery Log Change Notices: Not Supported 00:07:51.547 Controller Attributes 00:07:51.547 128-bit Host Identifier: Not Supported 00:07:51.548 Non-Operational Permissive Mode: Not Supported 00:07:51.548 NVM Sets: Not Supported 00:07:51.548 Read Recovery Levels: Not Supported 00:07:51.548 Endurance Groups: Supported 00:07:51.548 Predictable Latency Mode: Not Supported 00:07:51.548 Traffic Based Keep Alive: Not Supported 00:07:51.548 Namespace Granularity: Not Supported 00:07:51.548 SQ Associations: Not Supported 00:07:51.548 UUID List: Not Supported 00:07:51.548 Multi-Domain Subsystem: Not Supported 00:07:51.548 Fixed Capacity Management: Not Supported 00:07:51.548 Variable Capacity Management: Not Supported 00:07:51.548 Delete Endurance Group: Not Supported 00:07:51.548 Delete NVM Set: Not Supported 00:07:51.548 Extended LBA Formats Supported: Supported 00:07:51.548 Flexible Data Placement Supported: Supported 00:07:51.548 00:07:51.548 Controller Memory Buffer Support 00:07:51.548 ================================ 00:07:51.548 Supported: No 00:07:51.548 00:07:51.548 Persistent Memory Region Support 00:07:51.548 ================================ 00:07:51.548 Supported: No 00:07:51.548 00:07:51.548 Admin Command Set Attributes 00:07:51.548 ============================ 00:07:51.548 Security Send/Receive: Not Supported 00:07:51.548 Format NVM: Supported 00:07:51.548 Firmware Activate/Download: Not Supported 00:07:51.548 Namespace Management: Supported 00:07:51.548 Device Self-Test: Not Supported 00:07:51.548 Directives: Supported 00:07:51.548 NVMe-MI: Not Supported 00:07:51.548 Virtualization Management: Not Supported 00:07:51.548 Doorbell Buffer Config: Supported 00:07:51.548 Get LBA Status Capability: Not Supported 00:07:51.548 Command & Feature Lockdown Capability: Not Supported 00:07:51.548 Abort Command Limit: 4 00:07:51.548 Async Event Request Limit: 4 00:07:51.548 Number of Firmware Slots: N/A 00:07:51.548 Firmware Slot 1 Read-Only: N/A 00:07:51.548 Firmware Activation Without Reset: N/A 00:07:51.548 Multiple Update Detection Support: N/A 00:07:51.548 Firmware Update Granularity: No Information Provided 00:07:51.548 Per-Namespace SMART Log: Yes 00:07:51.548 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.548 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:51.548 Command Effects Log Page: Supported 00:07:51.548 Get Log Page Extended Data: Supported 00:07:51.548 Telemetry Log Pages: Not
Supported 00:07:51.548 Persistent Event Log Pages: Not Supported 00:07:51.548 Supported Log Pages Log Page: May Support 00:07:51.548 Commands Supported & Effects Log Page: Not Supported 00:07:51.548 Feature Identifiers & Effects Log Page: May Support 00:07:51.548 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.548 Data Area 4 for Telemetry Log: Not Supported 00:07:51.548 Error Log Page Entries Supported: 1 00:07:51.548 Keep Alive: Not Supported 00:07:51.548 00:07:51.548 NVM Command Set Attributes 00:07:51.548 ========================== 00:07:51.548 Submission Queue Entry Size 00:07:51.548 Max: 64 00:07:51.548 Min: 64 00:07:51.548 Completion Queue Entry Size 00:07:51.548 Max: 16 00:07:51.548 Min: 16 00:07:51.548 Number of Namespaces: 256 00:07:51.548 Compare Command: Supported 00:07:51.548 Write Uncorrectable Command: Not Supported 00:07:51.548 Dataset Management Command: Supported 00:07:51.548 Write Zeroes Command: Supported 00:07:51.548 Set Features Save Field: Supported 00:07:51.548 Reservations: Not Supported 00:07:51.548 Timestamp: Supported 00:07:51.548 Copy: Supported 00:07:51.548 Volatile Write Cache: Present 00:07:51.548 Atomic Write Unit (Normal): 1 00:07:51.548 Atomic Write Unit (PFail): 1 00:07:51.548 Atomic Compare & Write Unit: 1 00:07:51.548 Fused Compare & Write: Not Supported 00:07:51.548 Scatter-Gather List 00:07:51.548 SGL Command Set: Supported 00:07:51.548 SGL Keyed: Not Supported 00:07:51.548 SGL Bit Bucket Descriptor: Not Supported 00:07:51.548 SGL Metadata Pointer: Not Supported 00:07:51.548 Oversized SGL: Not Supported 00:07:51.548 SGL Metadata Address: Not Supported 00:07:51.548 SGL Offset: Not Supported 00:07:51.548 Transport SGL Data Block: Not Supported 00:07:51.548 Replay Protected Memory Block: Not Supported 00:07:51.548 00:07:51.548 Firmware Slot Information 00:07:51.548 ========================= 00:07:51.548 Active slot: 1 00:07:51.548 Slot 1 Firmware Revision: 1.0 00:07:51.548 00:07:51.548 00:07:51.548 Commands Supported and Effects 00:07:51.548 ============================== 00:07:51.548 Admin Commands 00:07:51.548 -------------- 00:07:51.548 Delete I/O Submission Queue (00h): Supported 00:07:51.548 Create I/O Submission Queue (01h): Supported 00:07:51.548 Get Log Page (02h): Supported 00:07:51.548 Delete I/O Completion Queue (04h): Supported 00:07:51.548 Create I/O Completion Queue (05h): Supported 00:07:51.548 Identify (06h): Supported 00:07:51.548 Abort (08h): Supported 00:07:51.548 Set Features (09h): Supported 00:07:51.548 Get Features (0Ah): Supported 00:07:51.548 Asynchronous Event Request (0Ch): Supported 00:07:51.548 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.548 Directive Send (19h): Supported 00:07:51.548 Directive Receive (1Ah): Supported 00:07:51.548 Virtualization Management (1Ch): Supported 00:07:51.548 Doorbell Buffer Config (7Ch): Supported 00:07:51.548 Format NVM (80h): Supported LBA-Change 00:07:51.548 I/O Commands 00:07:51.548 ------------ 00:07:51.548 Flush (00h): Supported LBA-Change 00:07:51.548 Write (01h): Supported LBA-Change 00:07:51.548 Read (02h): Supported 00:07:51.548 Compare (05h): Supported 00:07:51.548 Write Zeroes (08h): Supported LBA-Change 00:07:51.548 Dataset Management (09h): Supported LBA-Change 00:07:51.548 Unknown (0Ch): Supported 00:07:51.548 Unknown (12h): Supported 00:07:51.548 Copy (19h): Supported LBA-Change 00:07:51.548 Unknown (1Dh): Supported LBA-Change 00:07:51.548 00:07:51.548 Error Log 00:07:51.548 ========= 00:07:51.548 00:07:51.548 Arbitration 00:07:51.548 ===========
00:07:51.548 Arbitration Burst: no limit 00:07:51.548 00:07:51.548 Power Management 00:07:51.548 ================ 00:07:51.548 Number of Power States: 1 00:07:51.548 Current Power State: Power State #0 00:07:51.548 Power State #0: 00:07:51.548 Max Power: 25.00 W 00:07:51.548 Non-Operational State: Operational 00:07:51.548 Entry Latency: 16 microseconds 00:07:51.548 Exit Latency: 4 microseconds 00:07:51.548 Relative Read Throughput: 0 00:07:51.548 Relative Read Latency: 0 00:07:51.548 Relative Write Throughput: 0 00:07:51.548 Relative Write Latency: 0 00:07:51.548 Idle Power: Not Reported 00:07:51.548 Active Power: Not Reported 00:07:51.548 Non-Operational Permissive Mode: Not Supported 00:07:51.548 00:07:51.548 Health Information 00:07:51.548 ================== 00:07:51.548 Critical Warnings: 00:07:51.548 Available Spare Space: OK 00:07:51.548 Temperature: OK 00:07:51.548 Device Reliability: OK 00:07:51.548 Read Only: No 00:07:51.548 Volatile Memory Backup: OK 00:07:51.548 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.548 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.548 Available Spare: 0% 00:07:51.548 Available Spare Threshold: 0% 00:07:51.548 Life Percentage Used: 0% 00:07:51.548 Data Units Read: 863 00:07:51.548 Data Units Written: 792 00:07:51.548 Host Read Commands: 39335 00:07:51.548 Host Write Commands: 38758 00:07:51.548 Controller Busy Time: 0 minutes 00:07:51.548 Power Cycles: 0 00:07:51.548 Power On Hours: 0 hours 00:07:51.548 Unsafe Shutdowns: 0 00:07:51.548 Unrecoverable Media Errors: 0 00:07:51.548 Lifetime Error Log Entries: 0 00:07:51.548 Warning Temperature Time: 0 minutes 00:07:51.548 Critical Temperature Time: 0 minutes 00:07:51.548 00:07:51.548 Number of Queues 00:07:51.548 ================ 00:07:51.548 Number of I/O Submission Queues: 64 00:07:51.548 Number of I/O Completion Queues: 64 00:07:51.548 00:07:51.548 ZNS Specific Controller Data 00:07:51.548 ============================ 00:07:51.548 Zone Append Size Limit: 0 00:07:51.548 00:07:51.548 00:07:51.548 Active Namespaces 00:07:51.549 ================= 00:07:51.549 Namespace ID:1 00:07:51.549 Error Recovery Timeout: Unlimited 00:07:51.549 Command Set Identifier: NVM (00h) 00:07:51.549 Deallocate: Supported 00:07:51.549 Deallocated/Unwritten Error: Supported 00:07:51.549 Deallocated Read Value: All 0x00 00:07:51.549 Deallocate in Write Zeroes: Not Supported 00:07:51.549 Deallocated Guard Field: 0xFFFF 00:07:51.549 Flush: Supported 00:07:51.549 Reservation: Not Supported 00:07:51.549 Namespace Sharing Capabilities: Multiple Controllers 00:07:51.549 Size (in LBAs): 262144 (1GiB) 00:07:51.549 Capacity (in LBAs): 262144 (1GiB) 00:07:51.549 Utilization (in LBAs): 262144 (1GiB) 00:07:51.549 Thin Provisioning: Not Supported 00:07:51.549 Per-NS Atomic Units: No 00:07:51.549 Maximum Single Source Range Length: 128 00:07:51.549 Maximum Copy Length: 128 00:07:51.549 Maximum Source Range Count: 128 00:07:51.549 NGUID/EUI64 Never Reused: No 00:07:51.549 Namespace Write Protected: No 00:07:51.549 Endurance group ID: 1 00:07:51.549 Number of LBA Formats: 8 00:07:51.549 Current LBA Format: LBA Format #04 00:07:51.549 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.549 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.549 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.549 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.549 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.549 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.549 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:51.549 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.549 00:07:51.549 Get Feature FDP: 00:07:51.549 ================ 00:07:51.549 Enabled: Yes 00:07:51.549 FDP configuration index: 0 00:07:51.549 00:07:51.549 FDP configurations log page 00:07:51.549 =========================== 00:07:51.549 Number of FDP configurations: 1 00:07:51.549 Version: 0 00:07:51.549 Size: 112 00:07:51.549 FDP Configuration Descriptor: 0 00:07:51.549 Descriptor Size: 96 00:07:51.549 Reclaim Group Identifier format: 2 00:07:51.549 FDP Volatile Write Cache: Not Present 00:07:51.549 FDP Configuration: Valid 00:07:51.549 Vendor Specific Size: 0 00:07:51.549 Number of Reclaim Groups: 2 00:07:51.549 Number of Reclaim Unit Handles: 8 00:07:51.549 Max Placement Identifiers: 128 00:07:51.549 Number of Namespaces Supported: 256 00:07:51.549 Reclaim Unit Nominal Size: 6000000 bytes 00:07:51.549 Estimated Reclaim Unit Time Limit: Not Reported 00:07:51.549 RUH Desc #000: RUH Type: Initially Isolated 00:07:51.549 RUH Desc #001: RUH Type: Initially Isolated 00:07:51.549 RUH Desc #002: RUH Type: Initially Isolated 00:07:51.549 RUH Desc #003: RUH Type: Initially Isolated 00:07:51.549 RUH Desc #004: RUH Type: Initially Isolated 00:07:51.549 RUH Desc #005: RUH Type: Initially Isolated 00:07:51.549 RUH Desc #006: RUH Type: Initially Isolated 00:07:51.549 RUH Desc #007: RUH Type: Initially Isolated 00:07:51.549 00:07:51.549 FDP reclaim unit handle usage log page 00:07:51.549 ====================================== 00:07:51.549 Number of Reclaim Unit Handles: 8 00:07:51.549 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:51.549 RUH Usage Desc #001: RUH Attributes: Unused 00:07:51.549 RUH Usage Desc #002: RUH Attributes: Unused 00:07:51.549 RUH Usage Desc #003: RUH Attributes: Unused 00:07:51.549 RUH Usage Desc #004: RUH Attributes: Unused 00:07:51.549 RUH Usage Desc #005: RUH Attributes: Unused 00:07:51.549 RUH Usage Desc #006: RUH Attributes: Unused 00:07:51.549 RUH Usage Desc #007: RUH Attributes: Unused 00:07:51.549 00:07:51.549 FDP statistics log page 00:07:51.549 ======================= 00:07:51.549 Host bytes with metadata written: 498696192 00:07:51.549 Media bytes with metadata written: 498774016 00:07:51.549 Media bytes erased: 0 00:07:51.549 00:07:51.549 FDP events log page 00:07:51.549 =================== 00:07:51.549 Number of FDP events: 0 00:07:51.549 00:07:51.549 NVM Specific Namespace Data 00:07:51.549 =========================== 00:07:51.549 Logical Block Storage Tag Mask: 0 00:07:51.549 Protection Information Capabilities: 00:07:51.549 16b Guard Protection Information Storage Tag Support: No 00:07:51.549 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.549 Storage Tag Check Read Support: No 00:07:51.549 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.549 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.549 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.549 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.549 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.549 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.549 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.549 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.549 00:07:51.549 real 0m1.031s 00:07:51.549 user 0m0.378s 00:07:51.549 sys 0m0.422s 00:07:51.549 06:00:17 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.549 06:00:17 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:51.549 ************************************ 00:07:51.549 END TEST nvme_identify 00:07:51.549 ************************************ 00:07:51.549 06:00:17 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:51.549 06:00:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.549 06:00:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.549 06:00:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.549 ************************************ 00:07:51.549 START TEST nvme_perf 00:07:51.549 ************************************ 00:07:51.549 06:00:17 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:51.549 06:00:17 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:52.922 Initializing NVMe Controllers 00:07:52.922 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:52.922 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:52.922 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:52.922 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:52.922 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:52.922 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:52.922 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:52.922 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:52.922 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:52.922 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:52.922 Initialization complete. Launching workers. 
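A quick cross-check of the perf summary that follows (an editorial sketch, not output captured by this run): as I read spdk_nvme_perf's usage, the invocation above asked for queue depth 128 (-q 128), a 100% read pattern (-w read), 12288-byte I/Os (-o 12288) and a one-second run (-t 1); judging by the output, the doubled -L flag is what produces the latency summaries and per-bucket histograms printed below. With a fixed 12288-byte I/O size, each row's MiB/s column should equal its IOPS column times 12288 / 2^20, and the Total row should combine the six per-namespace rows. Hard-coding the per-device figure from the table below:
awk 'BEGIN {
    iops = 16866.90; io_bytes = 12288              # values copied from the summary table below
    printf "per-device: %.2f MiB/s\n", iops * io_bytes / (1024 * 1024)   # 197.66, matches the MiB/s column
    printf "total     : %.2f IOPS\n", iops * 6     # ~101201.40; table shows 101201.43 due to per-device rounding
}'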
00:07:52.922 ======================================================== 00:07:52.922 Latency(us) 00:07:52.922 Device Information : IOPS MiB/s Average min max 00:07:52.922 PCIE (0000:00:13.0) NSID 1 from core 0: 16866.90 197.66 7593.73 5401.43 32456.48 00:07:52.922 PCIE (0000:00:10.0) NSID 1 from core 0: 16866.90 197.66 7587.87 5116.84 31918.04 00:07:52.922 PCIE (0000:00:11.0) NSID 1 from core 0: 16866.90 197.66 7582.76 4990.08 31286.20 00:07:52.922 PCIE (0000:00:12.0) NSID 1 from core 0: 16866.90 197.66 7576.29 4188.23 31257.67 00:07:52.922 PCIE (0000:00:12.0) NSID 2 from core 0: 16866.90 197.66 7570.14 4015.42 30989.79 00:07:52.922 PCIE (0000:00:12.0) NSID 3 from core 0: 16866.90 197.66 7563.90 3903.28 30322.36 00:07:52.922 ======================================================== 00:07:52.922 Total : 101201.43 1185.95 7579.12 3903.28 32456.48 00:07:52.922 00:07:52.922 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:52.922 ================================================================================= 00:07:52.922 1.00000% : 5948.652us 00:07:52.922 10.00000% : 6377.157us 00:07:52.922 25.00000% : 6654.425us 00:07:52.922 50.00000% : 6956.898us 00:07:52.922 75.00000% : 7461.022us 00:07:52.922 90.00000% : 9376.689us 00:07:52.922 95.00000% : 11594.831us 00:07:52.922 98.00000% : 15426.166us 00:07:52.922 99.00000% : 16232.763us 00:07:52.922 99.50000% : 23088.837us 00:07:52.922 99.90000% : 32263.877us 00:07:52.922 99.99000% : 32465.526us 00:07:52.922 99.99900% : 32465.526us 00:07:52.922 99.99990% : 32465.526us 00:07:52.922 99.99999% : 32465.526us 00:07:52.922 00:07:52.922 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:52.922 ================================================================================= 00:07:52.922 1.00000% : 5873.034us 00:07:52.922 10.00000% : 6301.538us 00:07:52.922 25.00000% : 6604.012us 00:07:52.922 50.00000% : 6956.898us 00:07:52.922 75.00000% : 7511.434us 00:07:52.922 90.00000% : 9275.865us 00:07:52.922 95.00000% : 11494.006us 00:07:52.922 98.00000% : 15325.342us 00:07:52.922 99.00000% : 16333.588us 00:07:52.922 99.50000% : 26012.751us 00:07:52.922 99.90000% : 31658.929us 00:07:52.922 99.99000% : 32062.228us 00:07:52.922 99.99900% : 32062.228us 00:07:52.922 99.99990% : 32062.228us 00:07:52.922 99.99999% : 32062.228us 00:07:52.922 00:07:52.922 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:52.922 ================================================================================= 00:07:52.922 1.00000% : 5923.446us 00:07:52.922 10.00000% : 6326.745us 00:07:52.922 25.00000% : 6604.012us 00:07:52.922 50.00000% : 6956.898us 00:07:52.922 75.00000% : 7461.022us 00:07:52.922 90.00000% : 9175.040us 00:07:52.922 95.00000% : 11494.006us 00:07:52.922 98.00000% : 15123.692us 00:07:52.922 99.00000% : 16736.886us 00:07:52.922 99.50000% : 26012.751us 00:07:52.922 99.90000% : 31053.982us 00:07:52.922 99.99000% : 31457.280us 00:07:52.922 99.99900% : 31457.280us 00:07:52.922 99.99990% : 31457.280us 00:07:52.922 99.99999% : 31457.280us 00:07:52.922 00:07:52.922 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:52.922 ================================================================================= 00:07:52.922 1.00000% : 5898.240us 00:07:52.922 10.00000% : 6326.745us 00:07:52.922 25.00000% : 6604.012us 00:07:52.922 50.00000% : 6956.898us 00:07:52.922 75.00000% : 7461.022us 00:07:52.922 90.00000% : 9225.452us 00:07:52.922 95.00000% : 11846.892us 00:07:52.922 98.00000% : 15022.868us 00:07:52.922 99.00000% 
: 16736.886us 00:07:52.922 99.50000% : 26617.698us 00:07:52.922 99.90000% : 31053.982us 00:07:52.922 99.99000% : 31255.631us 00:07:52.922 99.99900% : 31457.280us 00:07:52.922 99.99990% : 31457.280us 00:07:52.922 99.99999% : 31457.280us 00:07:52.922 00:07:52.922 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:52.922 ================================================================================= 00:07:52.922 1.00000% : 5923.446us 00:07:52.922 10.00000% : 6326.745us 00:07:52.922 25.00000% : 6604.012us 00:07:52.922 50.00000% : 6956.898us 00:07:52.922 75.00000% : 7410.609us 00:07:52.922 90.00000% : 9225.452us 00:07:52.922 95.00000% : 11897.305us 00:07:52.922 98.00000% : 15325.342us 00:07:52.922 99.00000% : 16535.237us 00:07:52.922 99.50000% : 25811.102us 00:07:52.922 99.90000% : 30852.332us 00:07:52.922 99.99000% : 31053.982us 00:07:52.922 99.99900% : 31053.982us 00:07:52.922 99.99990% : 31053.982us 00:07:52.922 99.99999% : 31053.982us 00:07:52.922 00:07:52.922 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:52.922 ================================================================================= 00:07:52.922 1.00000% : 5898.240us 00:07:52.922 10.00000% : 6351.951us 00:07:52.922 25.00000% : 6604.012us 00:07:52.922 50.00000% : 6956.898us 00:07:52.922 75.00000% : 7461.022us 00:07:52.922 90.00000% : 9326.277us 00:07:52.922 95.00000% : 11594.831us 00:07:52.922 98.00000% : 15123.692us 00:07:52.922 99.00000% : 16434.412us 00:07:52.922 99.50000% : 25306.978us 00:07:52.922 99.90000% : 30247.385us 00:07:52.922 99.99000% : 30449.034us 00:07:52.922 99.99900% : 30449.034us 00:07:52.922 99.99990% : 30449.034us 00:07:52.922 99.99999% : 30449.034us 00:07:52.922 00:07:52.922 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:52.922 ============================================================================== 00:07:52.922 Range in us Cumulative IO count 00:07:52.922 5394.117 - 5419.323: 0.0355% ( 6) 00:07:52.922 5419.323 - 5444.529: 0.0473% ( 2) 00:07:52.922 5444.529 - 5469.735: 0.0533% ( 1) 00:07:52.922 5469.735 - 5494.942: 0.0592% ( 1) 00:07:52.922 5494.942 - 5520.148: 0.0769% ( 3) 00:07:52.922 5520.148 - 5545.354: 0.1006% ( 4) 00:07:52.922 5545.354 - 5570.560: 0.1125% ( 2) 00:07:52.922 5570.560 - 5595.766: 0.1243% ( 2) 00:07:52.922 5595.766 - 5620.972: 0.1361% ( 2) 00:07:52.922 5620.972 - 5646.178: 0.1539% ( 3) 00:07:52.922 5646.178 - 5671.385: 0.1716% ( 3) 00:07:52.922 5671.385 - 5696.591: 0.1835% ( 2) 00:07:52.922 5696.591 - 5721.797: 0.2012% ( 3) 00:07:52.922 5721.797 - 5747.003: 0.2308% ( 5) 00:07:52.922 5747.003 - 5772.209: 0.2604% ( 5) 00:07:52.922 5772.209 - 5797.415: 0.3433% ( 14) 00:07:52.922 5797.415 - 5822.622: 0.4321% ( 15) 00:07:52.922 5822.622 - 5847.828: 0.5149% ( 14) 00:07:52.922 5847.828 - 5873.034: 0.6688% ( 26) 00:07:52.922 5873.034 - 5898.240: 0.7872% ( 20) 00:07:52.922 5898.240 - 5923.446: 0.9115% ( 21) 00:07:52.923 5923.446 - 5948.652: 1.1364% ( 38) 00:07:52.923 5948.652 - 5973.858: 1.3080% ( 29) 00:07:52.923 5973.858 - 5999.065: 1.5743% ( 45) 00:07:52.923 5999.065 - 6024.271: 1.8111% ( 40) 00:07:52.923 6024.271 - 6049.477: 2.1248% ( 53) 00:07:52.923 6049.477 - 6074.683: 2.4029% ( 47) 00:07:52.923 6074.683 - 6099.889: 2.8113% ( 69) 00:07:52.923 6099.889 - 6125.095: 3.2079% ( 67) 00:07:52.923 6125.095 - 6150.302: 3.6695% ( 78) 00:07:52.923 6150.302 - 6175.508: 4.2554% ( 99) 00:07:52.923 6175.508 - 6200.714: 4.7644% ( 86) 00:07:52.923 6200.714 - 6225.920: 5.2320% ( 79) 00:07:52.923 6225.920 - 6251.126: 5.8653% ( 107) 
00:07:52.923 6251.126 - 6276.332: 6.5459% ( 115) 00:07:52.923 6276.332 - 6301.538: 7.3035% ( 128) 00:07:52.923 6301.538 - 6326.745: 8.0552% ( 127) 00:07:52.923 6326.745 - 6351.951: 9.0258% ( 164) 00:07:52.923 6351.951 - 6377.157: 10.0379% ( 171) 00:07:52.923 6377.157 - 6402.363: 11.2689% ( 208) 00:07:52.923 6402.363 - 6427.569: 12.5592% ( 218) 00:07:52.923 6427.569 - 6452.775: 14.0388% ( 250) 00:07:52.923 6452.775 - 6503.188: 17.1342% ( 523) 00:07:52.923 6503.188 - 6553.600: 20.8156% ( 622) 00:07:52.923 6553.600 - 6604.012: 24.7514% ( 665) 00:07:52.923 6604.012 - 6654.425: 29.2850% ( 766) 00:07:52.923 6654.425 - 6704.837: 33.3925% ( 694) 00:07:52.923 6704.837 - 6755.249: 37.4231% ( 681) 00:07:52.923 6755.249 - 6805.662: 41.3471% ( 663) 00:07:52.923 6805.662 - 6856.074: 45.0994% ( 634) 00:07:52.923 6856.074 - 6906.486: 48.7512% ( 617) 00:07:52.923 6906.486 - 6956.898: 52.2609% ( 593) 00:07:52.923 6956.898 - 7007.311: 55.5930% ( 563) 00:07:52.923 7007.311 - 7057.723: 58.7299% ( 530) 00:07:52.923 7057.723 - 7108.135: 61.7247% ( 506) 00:07:52.923 7108.135 - 7158.548: 64.5419% ( 476) 00:07:52.923 7158.548 - 7208.960: 67.0691% ( 427) 00:07:52.923 7208.960 - 7259.372: 69.2827% ( 374) 00:07:52.923 7259.372 - 7309.785: 71.2536% ( 333) 00:07:52.923 7309.785 - 7360.197: 73.0765% ( 308) 00:07:52.923 7360.197 - 7410.609: 74.5975% ( 257) 00:07:52.923 7410.609 - 7461.022: 75.8523% ( 212) 00:07:52.923 7461.022 - 7511.434: 76.9117% ( 179) 00:07:52.923 7511.434 - 7561.846: 77.9415% ( 174) 00:07:52.923 7561.846 - 7612.258: 78.8352% ( 151) 00:07:52.923 7612.258 - 7662.671: 79.6283% ( 134) 00:07:52.923 7662.671 - 7713.083: 80.3385% ( 120) 00:07:52.923 7713.083 - 7763.495: 80.9955% ( 111) 00:07:52.923 7763.495 - 7813.908: 81.6406% ( 109) 00:07:52.923 7813.908 - 7864.320: 82.1733% ( 90) 00:07:52.923 7864.320 - 7914.732: 82.5994% ( 72) 00:07:52.923 7914.732 - 7965.145: 83.0315% ( 73) 00:07:52.923 7965.145 - 8015.557: 83.4162% ( 65) 00:07:52.923 8015.557 - 8065.969: 83.8127% ( 67) 00:07:52.923 8065.969 - 8116.382: 84.2093% ( 67) 00:07:52.923 8116.382 - 8166.794: 84.5585% ( 59) 00:07:52.923 8166.794 - 8217.206: 84.8899% ( 56) 00:07:52.923 8217.206 - 8267.618: 85.2332% ( 58) 00:07:52.923 8267.618 - 8318.031: 85.6061% ( 63) 00:07:52.923 8318.031 - 8368.443: 85.9375% ( 56) 00:07:52.923 8368.443 - 8418.855: 86.2689% ( 56) 00:07:52.923 8418.855 - 8469.268: 86.5826% ( 53) 00:07:52.923 8469.268 - 8519.680: 86.8667% ( 48) 00:07:52.923 8519.680 - 8570.092: 87.1390% ( 46) 00:07:52.923 8570.092 - 8620.505: 87.3994% ( 44) 00:07:52.923 8620.505 - 8670.917: 87.6184% ( 37) 00:07:52.923 8670.917 - 8721.329: 87.8729% ( 43) 00:07:52.923 8721.329 - 8771.742: 88.1155% ( 41) 00:07:52.923 8771.742 - 8822.154: 88.3464% ( 39) 00:07:52.923 8822.154 - 8872.566: 88.5713% ( 38) 00:07:52.923 8872.566 - 8922.978: 88.7962% ( 38) 00:07:52.923 8922.978 - 8973.391: 89.0211% ( 38) 00:07:52.923 8973.391 - 9023.803: 89.2282% ( 35) 00:07:52.923 9023.803 - 9074.215: 89.3703% ( 24) 00:07:52.923 9074.215 - 9124.628: 89.5301% ( 27) 00:07:52.923 9124.628 - 9175.040: 89.6188% ( 15) 00:07:52.923 9175.040 - 9225.452: 89.7254% ( 18) 00:07:52.923 9225.452 - 9275.865: 89.8556% ( 22) 00:07:52.923 9275.865 - 9326.277: 89.9799% ( 21) 00:07:52.923 9326.277 - 9376.689: 90.0923% ( 19) 00:07:52.923 9376.689 - 9427.102: 90.2048% ( 19) 00:07:52.923 9427.102 - 9477.514: 90.3172% ( 19) 00:07:52.923 9477.514 - 9527.926: 90.4830% ( 28) 00:07:52.923 9527.926 - 9578.338: 90.6250% ( 24) 00:07:52.923 9578.338 - 9628.751: 90.7730% ( 25) 00:07:52.923 9628.751 - 9679.163: 90.8617% ( 
15) 00:07:52.923 9679.163 - 9729.575: 90.9624% ( 17) 00:07:52.923 9729.575 - 9779.988: 91.0393% ( 13) 00:07:52.923 9779.988 - 9830.400: 91.1281% ( 15) 00:07:52.923 9830.400 - 9880.812: 91.1991% ( 12) 00:07:52.923 9880.812 - 9931.225: 91.2879% ( 15) 00:07:52.923 9931.225 - 9981.637: 91.3589% ( 12) 00:07:52.923 9981.637 - 10032.049: 91.4773% ( 20) 00:07:52.923 10032.049 - 10082.462: 91.6134% ( 23) 00:07:52.923 10082.462 - 10132.874: 91.7377% ( 21) 00:07:52.923 10132.874 - 10183.286: 91.9034% ( 28) 00:07:52.923 10183.286 - 10233.698: 92.0336% ( 22) 00:07:52.923 10233.698 - 10284.111: 92.1816% ( 25) 00:07:52.923 10284.111 - 10334.523: 92.2822% ( 17) 00:07:52.923 10334.523 - 10384.935: 92.4183% ( 23) 00:07:52.923 10384.935 - 10435.348: 92.5426% ( 21) 00:07:52.923 10435.348 - 10485.760: 92.6787% ( 23) 00:07:52.923 10485.760 - 10536.172: 92.7675% ( 15) 00:07:52.923 10536.172 - 10586.585: 92.8681% ( 17) 00:07:52.923 10586.585 - 10636.997: 92.9688% ( 17) 00:07:52.923 10636.997 - 10687.409: 93.0694% ( 17) 00:07:52.923 10687.409 - 10737.822: 93.1700% ( 17) 00:07:52.923 10737.822 - 10788.234: 93.2943% ( 21) 00:07:52.923 10788.234 - 10838.646: 93.4067% ( 19) 00:07:52.923 10838.646 - 10889.058: 93.5369% ( 22) 00:07:52.923 10889.058 - 10939.471: 93.6671% ( 22) 00:07:52.923 10939.471 - 10989.883: 93.7914% ( 21) 00:07:52.923 10989.883 - 11040.295: 93.8980% ( 18) 00:07:52.923 11040.295 - 11090.708: 94.0400% ( 24) 00:07:52.923 11090.708 - 11141.120: 94.2057% ( 28) 00:07:52.923 11141.120 - 11191.532: 94.4247% ( 37) 00:07:52.923 11191.532 - 11241.945: 94.5372% ( 19) 00:07:52.923 11241.945 - 11292.357: 94.5904% ( 9) 00:07:52.923 11292.357 - 11342.769: 94.6378% ( 8) 00:07:52.923 11342.769 - 11393.182: 94.7088% ( 12) 00:07:52.923 11393.182 - 11443.594: 94.8153% ( 18) 00:07:52.923 11443.594 - 11494.006: 94.8864% ( 12) 00:07:52.923 11494.006 - 11544.418: 94.9692% ( 14) 00:07:52.923 11544.418 - 11594.831: 95.0521% ( 14) 00:07:52.923 11594.831 - 11645.243: 95.1468% ( 16) 00:07:52.923 11645.243 - 11695.655: 95.2415% ( 16) 00:07:52.923 11695.655 - 11746.068: 95.3125% ( 12) 00:07:52.923 11746.068 - 11796.480: 95.4072% ( 16) 00:07:52.923 11796.480 - 11846.892: 95.4782% ( 12) 00:07:52.923 11846.892 - 11897.305: 95.5670% ( 15) 00:07:52.923 11897.305 - 11947.717: 95.6321% ( 11) 00:07:52.923 11947.717 - 11998.129: 95.7031% ( 12) 00:07:52.923 11998.129 - 12048.542: 95.7860% ( 14) 00:07:52.923 12048.542 - 12098.954: 95.8925% ( 18) 00:07:52.923 12098.954 - 12149.366: 95.9635% ( 12) 00:07:52.923 12149.366 - 12199.778: 96.0227% ( 10) 00:07:52.923 12199.778 - 12250.191: 96.0760% ( 9) 00:07:52.923 12250.191 - 12300.603: 96.1589% ( 14) 00:07:52.923 12300.603 - 12351.015: 96.1944% ( 6) 00:07:52.923 12351.015 - 12401.428: 96.2121% ( 3) 00:07:52.923 12401.428 - 12451.840: 96.2240% ( 2) 00:07:52.923 12451.840 - 12502.252: 96.2417% ( 3) 00:07:52.923 12502.252 - 12552.665: 96.2772% ( 6) 00:07:52.923 12552.665 - 12603.077: 96.2950% ( 3) 00:07:52.923 12603.077 - 12653.489: 96.3246% ( 5) 00:07:52.923 12653.489 - 12703.902: 96.3601% ( 6) 00:07:52.923 12703.902 - 12754.314: 96.4134% ( 9) 00:07:52.923 12754.314 - 12804.726: 96.4903% ( 13) 00:07:52.923 12804.726 - 12855.138: 96.5317% ( 7) 00:07:52.923 12855.138 - 12905.551: 96.6027% ( 12) 00:07:52.923 12905.551 - 13006.375: 96.7152% ( 19) 00:07:52.923 13006.375 - 13107.200: 96.7566% ( 7) 00:07:52.923 13107.200 - 13208.025: 96.8099% ( 9) 00:07:52.923 13208.025 - 13308.849: 96.8750% ( 11) 00:07:52.923 13308.849 - 13409.674: 96.9223% ( 8) 00:07:52.923 13409.674 - 13510.498: 96.9756% ( 9) 
00:07:52.923 13510.498 - 13611.323: 97.0170% ( 7) 00:07:52.923 13611.323 - 13712.148: 97.0466% ( 5) 00:07:52.923 13712.148 - 13812.972: 97.0703% ( 4) 00:07:52.923 13812.972 - 13913.797: 97.0999% ( 5) 00:07:52.923 13913.797 - 14014.622: 97.1236% ( 4) 00:07:52.924 14014.622 - 14115.446: 97.1532% ( 5) 00:07:52.924 14115.446 - 14216.271: 97.1768% ( 4) 00:07:52.924 14216.271 - 14317.095: 97.2064% ( 5) 00:07:52.924 14317.095 - 14417.920: 97.2420% ( 6) 00:07:52.924 14417.920 - 14518.745: 97.2715% ( 5) 00:07:52.924 14518.745 - 14619.569: 97.3307% ( 10) 00:07:52.924 14619.569 - 14720.394: 97.3958% ( 11) 00:07:52.924 14720.394 - 14821.218: 97.4964% ( 17) 00:07:52.924 14821.218 - 14922.043: 97.5911% ( 16) 00:07:52.924 14922.043 - 15022.868: 97.6799% ( 15) 00:07:52.924 15022.868 - 15123.692: 97.7687% ( 15) 00:07:52.924 15123.692 - 15224.517: 97.8693% ( 17) 00:07:52.924 15224.517 - 15325.342: 97.9759% ( 18) 00:07:52.924 15325.342 - 15426.166: 98.0883% ( 19) 00:07:52.924 15426.166 - 15526.991: 98.2481% ( 27) 00:07:52.924 15526.991 - 15627.815: 98.3902% ( 24) 00:07:52.924 15627.815 - 15728.640: 98.5026% ( 19) 00:07:52.924 15728.640 - 15829.465: 98.5973% ( 16) 00:07:52.924 15829.465 - 15930.289: 98.7098% ( 19) 00:07:52.924 15930.289 - 16031.114: 98.8163% ( 18) 00:07:52.924 16031.114 - 16131.938: 98.9228% ( 18) 00:07:52.924 16131.938 - 16232.763: 99.0175% ( 16) 00:07:52.924 16232.763 - 16333.588: 99.0826% ( 11) 00:07:52.924 16333.588 - 16434.412: 99.1418% ( 10) 00:07:52.924 16434.412 - 16535.237: 99.2010% ( 10) 00:07:52.924 16535.237 - 16636.062: 99.2306% ( 5) 00:07:52.924 16636.062 - 16736.886: 99.2424% ( 2) 00:07:52.924 22080.591 - 22181.415: 99.2483% ( 1) 00:07:52.924 22181.415 - 22282.240: 99.2720% ( 4) 00:07:52.924 22282.240 - 22383.065: 99.3075% ( 6) 00:07:52.924 22383.065 - 22483.889: 99.3371% ( 5) 00:07:52.924 22483.889 - 22584.714: 99.3726% ( 6) 00:07:52.924 22584.714 - 22685.538: 99.4022% ( 5) 00:07:52.924 22685.538 - 22786.363: 99.4377% ( 6) 00:07:52.924 22786.363 - 22887.188: 99.4673% ( 5) 00:07:52.924 22887.188 - 22988.012: 99.4969% ( 5) 00:07:52.924 22988.012 - 23088.837: 99.5206% ( 4) 00:07:52.924 23088.837 - 23189.662: 99.5502% ( 5) 00:07:52.924 23189.662 - 23290.486: 99.5857% ( 6) 00:07:52.924 23290.486 - 23391.311: 99.6153% ( 5) 00:07:52.924 23391.311 - 23492.135: 99.6212% ( 1) 00:07:52.924 31255.631 - 31457.280: 99.6390% ( 3) 00:07:52.924 31457.280 - 31658.929: 99.7100% ( 12) 00:07:52.924 31658.929 - 31860.578: 99.7810% ( 12) 00:07:52.924 31860.578 - 32062.228: 99.8580% ( 13) 00:07:52.924 32062.228 - 32263.877: 99.9290% ( 12) 00:07:52.924 32263.877 - 32465.526: 100.0000% ( 12) 00:07:52.924 00:07:52.924 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:52.924 ============================================================================== 00:07:52.924 Range in us Cumulative IO count 00:07:52.924 5091.643 - 5116.849: 0.0059% ( 1) 00:07:52.924 5142.055 - 5167.262: 0.0178% ( 2) 00:07:52.924 5167.262 - 5192.468: 0.0473% ( 5) 00:07:52.924 5192.468 - 5217.674: 0.0533% ( 1) 00:07:52.924 5217.674 - 5242.880: 0.0592% ( 1) 00:07:52.924 5242.880 - 5268.086: 0.0651% ( 1) 00:07:52.924 5268.086 - 5293.292: 0.0769% ( 2) 00:07:52.924 5293.292 - 5318.498: 0.0829% ( 1) 00:07:52.924 5318.498 - 5343.705: 0.0947% ( 2) 00:07:52.924 5343.705 - 5368.911: 0.1006% ( 1) 00:07:52.924 5368.911 - 5394.117: 0.1125% ( 2) 00:07:52.924 5394.117 - 5419.323: 0.1184% ( 1) 00:07:52.924 5419.323 - 5444.529: 0.1302% ( 2) 00:07:52.924 5444.529 - 5469.735: 0.1420% ( 2) 00:07:52.924 5469.735 - 5494.942: 0.1539% ( 
2) 00:07:52.924 5520.148 - 5545.354: 0.1776% ( 4) 00:07:52.924 5545.354 - 5570.560: 0.1894% ( 2) 00:07:52.924 5570.560 - 5595.766: 0.1953% ( 1) 00:07:52.924 5595.766 - 5620.972: 0.2071% ( 2) 00:07:52.924 5620.972 - 5646.178: 0.2190% ( 2) 00:07:52.924 5646.178 - 5671.385: 0.2367% ( 3) 00:07:52.924 5671.385 - 5696.591: 0.3018% ( 11) 00:07:52.924 5696.591 - 5721.797: 0.3788% ( 13) 00:07:52.924 5721.797 - 5747.003: 0.4616% ( 14) 00:07:52.924 5747.003 - 5772.209: 0.5386% ( 13) 00:07:52.924 5772.209 - 5797.415: 0.6274% ( 15) 00:07:52.924 5797.415 - 5822.622: 0.8227% ( 33) 00:07:52.924 5822.622 - 5847.828: 0.9706% ( 25) 00:07:52.924 5847.828 - 5873.034: 1.1304% ( 27) 00:07:52.924 5873.034 - 5898.240: 1.2784% ( 25) 00:07:52.924 5898.240 - 5923.446: 1.5329% ( 43) 00:07:52.924 5923.446 - 5948.652: 1.7637% ( 39) 00:07:52.924 5948.652 - 5973.858: 2.0064% ( 41) 00:07:52.924 5973.858 - 5999.065: 2.2431% ( 40) 00:07:52.924 5999.065 - 6024.271: 2.6574% ( 70) 00:07:52.924 6024.271 - 6049.477: 3.0836% ( 72) 00:07:52.924 6049.477 - 6074.683: 3.5926% ( 86) 00:07:52.924 6074.683 - 6099.889: 4.0661% ( 80) 00:07:52.924 6099.889 - 6125.095: 4.6993% ( 107) 00:07:52.924 6125.095 - 6150.302: 5.2794% ( 98) 00:07:52.924 6150.302 - 6175.508: 6.0192% ( 125) 00:07:52.924 6175.508 - 6200.714: 6.8359% ( 138) 00:07:52.924 6200.714 - 6225.920: 7.7356% ( 152) 00:07:52.924 6225.920 - 6251.126: 8.7180% ( 166) 00:07:52.924 6251.126 - 6276.332: 9.7834% ( 180) 00:07:52.924 6276.332 - 6301.538: 10.8369% ( 178) 00:07:52.924 6301.538 - 6326.745: 12.0147% ( 199) 00:07:52.924 6326.745 - 6351.951: 13.2221% ( 204) 00:07:52.924 6351.951 - 6377.157: 14.3821% ( 196) 00:07:52.924 6377.157 - 6402.363: 15.7019% ( 223) 00:07:52.924 6402.363 - 6427.569: 17.0159% ( 222) 00:07:52.924 6427.569 - 6452.775: 18.4008% ( 234) 00:07:52.924 6452.775 - 6503.188: 21.1589% ( 466) 00:07:52.924 6503.188 - 6553.600: 24.0471% ( 488) 00:07:52.924 6553.600 - 6604.012: 27.1011% ( 516) 00:07:52.924 6604.012 - 6654.425: 30.3918% ( 556) 00:07:52.924 6654.425 - 6704.837: 33.6529% ( 551) 00:07:52.924 6704.837 - 6755.249: 36.9022% ( 549) 00:07:52.924 6755.249 - 6805.662: 40.2403% ( 564) 00:07:52.924 6805.662 - 6856.074: 43.5310% ( 556) 00:07:52.924 6856.074 - 6906.486: 46.7034% ( 536) 00:07:52.924 6906.486 - 6956.898: 50.0533% ( 566) 00:07:52.924 6956.898 - 7007.311: 53.2315% ( 537) 00:07:52.924 7007.311 - 7057.723: 56.3506% ( 527) 00:07:52.924 7057.723 - 7108.135: 59.3809% ( 512) 00:07:52.924 7108.135 - 7158.548: 62.2159% ( 479) 00:07:52.924 7158.548 - 7208.960: 64.9444% ( 461) 00:07:52.924 7208.960 - 7259.372: 67.4775% ( 428) 00:07:52.924 7259.372 - 7309.785: 69.6674% ( 370) 00:07:52.924 7309.785 - 7360.197: 71.7211% ( 347) 00:07:52.924 7360.197 - 7410.609: 73.4020% ( 284) 00:07:52.924 7410.609 - 7461.022: 74.9112% ( 255) 00:07:52.924 7461.022 - 7511.434: 76.2192% ( 221) 00:07:52.924 7511.434 - 7561.846: 77.4029% ( 200) 00:07:52.924 7561.846 - 7612.258: 78.4209% ( 172) 00:07:52.924 7612.258 - 7662.671: 79.3205% ( 152) 00:07:52.924 7662.671 - 7713.083: 80.1610% ( 142) 00:07:52.924 7713.083 - 7763.495: 80.8120% ( 110) 00:07:52.924 7763.495 - 7813.908: 81.3980% ( 99) 00:07:52.924 7813.908 - 7864.320: 81.9010% ( 85) 00:07:52.924 7864.320 - 7914.732: 82.3272% ( 72) 00:07:52.924 7914.732 - 7965.145: 82.7415% ( 70) 00:07:52.924 7965.145 - 8015.557: 83.1143% ( 63) 00:07:52.924 8015.557 - 8065.969: 83.5227% ( 69) 00:07:52.924 8065.969 - 8116.382: 83.9311% ( 69) 00:07:52.924 8116.382 - 8166.794: 84.3513% ( 71) 00:07:52.924 8166.794 - 8217.206: 84.7360% ( 65) 00:07:52.924 
8217.206 - 8267.618: 85.0971% ( 61) 00:07:52.924 8267.618 - 8318.031: 85.4403% ( 58) 00:07:52.925 8318.031 - 8368.443: 85.7659% ( 55) 00:07:52.925 8368.443 - 8418.855: 86.0440% ( 47) 00:07:52.925 8418.855 - 8469.268: 86.3518% ( 52) 00:07:52.925 8469.268 - 8519.680: 86.6122% ( 44) 00:07:52.925 8519.680 - 8570.092: 86.8608% ( 42) 00:07:52.925 8570.092 - 8620.505: 87.1094% ( 42) 00:07:52.925 8620.505 - 8670.917: 87.3224% ( 36) 00:07:52.925 8670.917 - 8721.329: 87.5888% ( 45) 00:07:52.925 8721.329 - 8771.742: 87.8196% ( 39) 00:07:52.925 8771.742 - 8822.154: 88.0327% ( 36) 00:07:52.925 8822.154 - 8872.566: 88.2812% ( 42) 00:07:52.925 8872.566 - 8922.978: 88.5298% ( 42) 00:07:52.925 8922.978 - 8973.391: 88.7547% ( 38) 00:07:52.925 8973.391 - 9023.803: 88.9915% ( 40) 00:07:52.925 9023.803 - 9074.215: 89.1986% ( 35) 00:07:52.925 9074.215 - 9124.628: 89.4058% ( 35) 00:07:52.925 9124.628 - 9175.040: 89.6188% ( 36) 00:07:52.925 9175.040 - 9225.452: 89.7609% ( 24) 00:07:52.925 9225.452 - 9275.865: 90.0272% ( 45) 00:07:52.925 9275.865 - 9326.277: 90.2225% ( 33) 00:07:52.925 9326.277 - 9376.689: 90.3764% ( 26) 00:07:52.925 9376.689 - 9427.102: 90.5481% ( 29) 00:07:52.925 9427.102 - 9477.514: 90.7138% ( 28) 00:07:52.925 9477.514 - 9527.926: 90.8440% ( 22) 00:07:52.925 9527.926 - 9578.338: 90.9446% ( 17) 00:07:52.925 9578.338 - 9628.751: 91.0926% ( 25) 00:07:52.925 9628.751 - 9679.163: 91.1873% ( 16) 00:07:52.925 9679.163 - 9729.575: 91.2760% ( 15) 00:07:52.925 9729.575 - 9779.988: 91.3707% ( 16) 00:07:52.925 9779.988 - 9830.400: 91.4654% ( 16) 00:07:52.925 9830.400 - 9880.812: 91.5542% ( 15) 00:07:52.925 9880.812 - 9931.225: 91.6371% ( 14) 00:07:52.925 9931.225 - 9981.637: 91.7377% ( 17) 00:07:52.925 9981.637 - 10032.049: 91.8324% ( 16) 00:07:52.925 10032.049 - 10082.462: 91.9212% ( 15) 00:07:52.925 10082.462 - 10132.874: 92.0573% ( 23) 00:07:52.925 10132.874 - 10183.286: 92.1934% ( 23) 00:07:52.925 10183.286 - 10233.698: 92.3355% ( 24) 00:07:52.925 10233.698 - 10284.111: 92.4124% ( 13) 00:07:52.925 10284.111 - 10334.523: 92.5545% ( 24) 00:07:52.925 10334.523 - 10384.935: 92.7024% ( 25) 00:07:52.925 10384.935 - 10435.348: 92.8504% ( 25) 00:07:52.925 10435.348 - 10485.760: 92.9806% ( 22) 00:07:52.925 10485.760 - 10536.172: 93.0990% ( 20) 00:07:52.925 10536.172 - 10586.585: 93.2055% ( 18) 00:07:52.925 10586.585 - 10636.997: 93.3239% ( 20) 00:07:52.925 10636.997 - 10687.409: 93.4245% ( 17) 00:07:52.925 10687.409 - 10737.822: 93.5606% ( 23) 00:07:52.925 10737.822 - 10788.234: 93.6908% ( 22) 00:07:52.925 10788.234 - 10838.646: 93.8447% ( 26) 00:07:52.925 10838.646 - 10889.058: 93.9394% ( 16) 00:07:52.925 10889.058 - 10939.471: 94.0578% ( 20) 00:07:52.925 10939.471 - 10989.883: 94.1821% ( 21) 00:07:52.925 10989.883 - 11040.295: 94.2649% ( 14) 00:07:52.925 11040.295 - 11090.708: 94.4070% ( 24) 00:07:52.925 11090.708 - 11141.120: 94.4721% ( 11) 00:07:52.925 11141.120 - 11191.532: 94.5372% ( 11) 00:07:52.925 11191.532 - 11241.945: 94.6141% ( 13) 00:07:52.925 11241.945 - 11292.357: 94.6733% ( 10) 00:07:52.925 11292.357 - 11342.769: 94.7562% ( 14) 00:07:52.925 11342.769 - 11393.182: 94.8449% ( 15) 00:07:52.925 11393.182 - 11443.594: 94.9751% ( 22) 00:07:52.925 11443.594 - 11494.006: 95.0639% ( 15) 00:07:52.925 11494.006 - 11544.418: 95.1882% ( 21) 00:07:52.925 11544.418 - 11594.831: 95.2888% ( 17) 00:07:52.925 11594.831 - 11645.243: 95.3717% ( 14) 00:07:52.925 11645.243 - 11695.655: 95.4841% ( 19) 00:07:52.925 11695.655 - 11746.068: 95.6025% ( 20) 00:07:52.925 11746.068 - 11796.480: 95.7209% ( 20) 00:07:52.925 
11796.480 - 11846.892: 95.7741% ( 9) 00:07:52.925 11846.892 - 11897.305: 95.8570% ( 14) 00:07:52.925 11897.305 - 11947.717: 95.9399% ( 14) 00:07:52.925 11947.717 - 11998.129: 96.0286% ( 15) 00:07:52.925 11998.129 - 12048.542: 96.1352% ( 18) 00:07:52.925 12048.542 - 12098.954: 96.2299% ( 16) 00:07:52.925 12098.954 - 12149.366: 96.3127% ( 14) 00:07:52.925 12149.366 - 12199.778: 96.3719% ( 10) 00:07:52.925 12199.778 - 12250.191: 96.4607% ( 15) 00:07:52.925 12250.191 - 12300.603: 96.5199% ( 10) 00:07:52.925 12300.603 - 12351.015: 96.5672% ( 8) 00:07:52.925 12351.015 - 12401.428: 96.6027% ( 6) 00:07:52.925 12401.428 - 12451.840: 96.6264% ( 4) 00:07:52.925 12451.840 - 12502.252: 96.6442% ( 3) 00:07:52.925 12502.252 - 12552.665: 96.6679% ( 4) 00:07:52.925 12552.665 - 12603.077: 96.6797% ( 2) 00:07:52.925 12603.077 - 12653.489: 96.6915% ( 2) 00:07:52.925 12653.489 - 12703.902: 96.7034% ( 2) 00:07:52.925 12703.902 - 12754.314: 96.7152% ( 2) 00:07:52.925 12754.314 - 12804.726: 96.7389% ( 4) 00:07:52.925 12804.726 - 12855.138: 96.7625% ( 4) 00:07:52.925 12855.138 - 12905.551: 96.8217% ( 10) 00:07:52.925 12905.551 - 13006.375: 96.8632% ( 7) 00:07:52.925 13006.375 - 13107.200: 96.9105% ( 8) 00:07:52.925 13107.200 - 13208.025: 96.9401% ( 5) 00:07:52.925 13208.025 - 13308.849: 96.9519% ( 2) 00:07:52.925 13308.849 - 13409.674: 96.9579% ( 1) 00:07:52.925 13409.674 - 13510.498: 96.9697% ( 2) 00:07:52.925 13712.148 - 13812.972: 96.9934% ( 4) 00:07:52.925 13812.972 - 13913.797: 97.0348% ( 7) 00:07:52.925 13913.797 - 14014.622: 97.0940% ( 10) 00:07:52.925 14014.622 - 14115.446: 97.1295% ( 6) 00:07:52.925 14115.446 - 14216.271: 97.1887% ( 10) 00:07:52.925 14216.271 - 14317.095: 97.2242% ( 6) 00:07:52.925 14317.095 - 14417.920: 97.2656% ( 7) 00:07:52.925 14417.920 - 14518.745: 97.3130% ( 8) 00:07:52.925 14518.745 - 14619.569: 97.3899% ( 13) 00:07:52.925 14619.569 - 14720.394: 97.4964% ( 18) 00:07:52.925 14720.394 - 14821.218: 97.5852% ( 15) 00:07:52.925 14821.218 - 14922.043: 97.6858% ( 17) 00:07:52.925 14922.043 - 15022.868: 97.7569% ( 12) 00:07:52.925 15022.868 - 15123.692: 97.8812% ( 21) 00:07:52.925 15123.692 - 15224.517: 97.9759% ( 16) 00:07:52.925 15224.517 - 15325.342: 98.0765% ( 17) 00:07:52.925 15325.342 - 15426.166: 98.1889% ( 19) 00:07:52.925 15426.166 - 15526.991: 98.2955% ( 18) 00:07:52.925 15526.991 - 15627.815: 98.4257% ( 22) 00:07:52.925 15627.815 - 15728.640: 98.5381% ( 19) 00:07:52.925 15728.640 - 15829.465: 98.6624% ( 21) 00:07:52.925 15829.465 - 15930.289: 98.7512% ( 15) 00:07:52.925 15930.289 - 16031.114: 98.8340% ( 14) 00:07:52.925 16031.114 - 16131.938: 98.8991% ( 11) 00:07:52.925 16131.938 - 16232.763: 98.9702% ( 12) 00:07:52.925 16232.763 - 16333.588: 99.0412% ( 12) 00:07:52.925 16333.588 - 16434.412: 99.0826% ( 7) 00:07:52.925 16434.412 - 16535.237: 99.1300% ( 8) 00:07:52.925 16535.237 - 16636.062: 99.1714% ( 7) 00:07:52.925 16636.062 - 16736.886: 99.2010% ( 5) 00:07:52.925 16736.886 - 16837.711: 99.2424% ( 7) 00:07:52.925 25105.329 - 25206.154: 99.2720% ( 5) 00:07:52.925 25206.154 - 25306.978: 99.3075% ( 6) 00:07:52.925 25306.978 - 25407.803: 99.3371% ( 5) 00:07:52.925 25407.803 - 25508.628: 99.3726% ( 6) 00:07:52.925 25508.628 - 25609.452: 99.4022% ( 5) 00:07:52.925 25609.452 - 25710.277: 99.4318% ( 5) 00:07:52.925 25710.277 - 25811.102: 99.4614% ( 5) 00:07:52.925 25811.102 - 26012.751: 99.5265% ( 11) 00:07:52.925 26012.751 - 26214.400: 99.5857% ( 10) 00:07:52.925 26214.400 - 26416.049: 99.6212% ( 6) 00:07:52.925 30650.683 - 30852.332: 99.6390% ( 3) 00:07:52.925 30852.332 - 31053.982: 
99.7100% ( 12) 00:07:52.925 31053.982 - 31255.631: 99.7751% ( 11) 00:07:52.925 31255.631 - 31457.280: 99.8461% ( 12) 00:07:52.925 31457.280 - 31658.929: 99.9171% ( 12) 00:07:52.925 31658.929 - 31860.578: 99.9763% ( 10) 00:07:52.925 31860.578 - 32062.228: 100.0000% ( 4) 00:07:52.925 00:07:52.925 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:52.925 ============================================================================== 00:07:52.925 Range in us Cumulative IO count 00:07:52.925 4965.612 - 4990.818: 0.0059% ( 1) 00:07:52.925 4990.818 - 5016.025: 0.0355% ( 5) 00:07:52.925 5016.025 - 5041.231: 0.0473% ( 2) 00:07:52.925 5041.231 - 5066.437: 0.0533% ( 1) 00:07:52.925 5066.437 - 5091.643: 0.0651% ( 2) 00:07:52.925 5091.643 - 5116.849: 0.0769% ( 2) 00:07:52.925 5116.849 - 5142.055: 0.0947% ( 3) 00:07:52.925 5142.055 - 5167.262: 0.1065% ( 2) 00:07:52.925 5167.262 - 5192.468: 0.1184% ( 2) 00:07:52.925 5192.468 - 5217.674: 0.1243% ( 1) 00:07:52.925 5217.674 - 5242.880: 0.1420% ( 3) 00:07:52.925 5242.880 - 5268.086: 0.1539% ( 2) 00:07:52.925 5268.086 - 5293.292: 0.1657% ( 2) 00:07:52.925 5293.292 - 5318.498: 0.1776% ( 2) 00:07:52.925 5318.498 - 5343.705: 0.1894% ( 2) 00:07:52.925 5343.705 - 5368.911: 0.2012% ( 2) 00:07:52.925 5368.911 - 5394.117: 0.2131% ( 2) 00:07:52.925 5394.117 - 5419.323: 0.2249% ( 2) 00:07:52.925 5419.323 - 5444.529: 0.2367% ( 2) 00:07:52.925 5444.529 - 5469.735: 0.2486% ( 2) 00:07:52.926 5469.735 - 5494.942: 0.2663% ( 3) 00:07:52.926 5494.942 - 5520.148: 0.2782% ( 2) 00:07:52.926 5520.148 - 5545.354: 0.2900% ( 2) 00:07:52.926 5545.354 - 5570.560: 0.3018% ( 2) 00:07:52.926 5570.560 - 5595.766: 0.3078% ( 1) 00:07:52.926 5595.766 - 5620.972: 0.3196% ( 2) 00:07:52.926 5620.972 - 5646.178: 0.3374% ( 3) 00:07:52.926 5646.178 - 5671.385: 0.3492% ( 2) 00:07:52.926 5671.385 - 5696.591: 0.3610% ( 2) 00:07:52.926 5696.591 - 5721.797: 0.3729% ( 2) 00:07:52.926 5721.797 - 5747.003: 0.4084% ( 6) 00:07:52.926 5747.003 - 5772.209: 0.4498% ( 7) 00:07:52.926 5772.209 - 5797.415: 0.4972% ( 8) 00:07:52.926 5797.415 - 5822.622: 0.5445% ( 8) 00:07:52.926 5822.622 - 5847.828: 0.6214% ( 13) 00:07:52.926 5847.828 - 5873.034: 0.7398% ( 20) 00:07:52.926 5873.034 - 5898.240: 0.9115% ( 29) 00:07:52.926 5898.240 - 5923.446: 1.1009% ( 32) 00:07:52.926 5923.446 - 5948.652: 1.3258% ( 38) 00:07:52.926 5948.652 - 5973.858: 1.5803% ( 43) 00:07:52.926 5973.858 - 5999.065: 1.8407% ( 44) 00:07:52.926 5999.065 - 6024.271: 2.1070% ( 45) 00:07:52.926 6024.271 - 6049.477: 2.3793% ( 46) 00:07:52.926 6049.477 - 6074.683: 2.7580% ( 64) 00:07:52.926 6074.683 - 6099.889: 3.2670% ( 86) 00:07:52.926 6099.889 - 6125.095: 3.8530% ( 99) 00:07:52.926 6125.095 - 6150.302: 4.4567% ( 102) 00:07:52.926 6150.302 - 6175.508: 5.0959% ( 108) 00:07:52.926 6175.508 - 6200.714: 5.7765% ( 115) 00:07:52.926 6200.714 - 6225.920: 6.4986% ( 122) 00:07:52.926 6225.920 - 6251.126: 7.3272% ( 140) 00:07:52.926 6251.126 - 6276.332: 8.2564% ( 157) 00:07:52.926 6276.332 - 6301.538: 9.1560% ( 152) 00:07:52.926 6301.538 - 6326.745: 10.1622% ( 170) 00:07:52.926 6326.745 - 6351.951: 11.2512% ( 184) 00:07:52.926 6351.951 - 6377.157: 12.4527% ( 203) 00:07:52.926 6377.157 - 6402.363: 13.7429% ( 218) 00:07:52.926 6402.363 - 6427.569: 15.1338% ( 235) 00:07:52.926 6427.569 - 6452.775: 16.5187% ( 234) 00:07:52.926 6452.775 - 6503.188: 19.4898% ( 502) 00:07:52.926 6503.188 - 6553.600: 22.7036% ( 543) 00:07:52.926 6553.600 - 6604.012: 26.1837% ( 588) 00:07:52.926 6604.012 - 6654.425: 29.5277% ( 565) 00:07:52.926 6654.425 - 6704.837: 33.0670% 
( 598) 00:07:52.926 6704.837 - 6755.249: 36.7602% ( 624) 00:07:52.926 6755.249 - 6805.662: 40.3587% ( 608) 00:07:52.926 6805.662 - 6856.074: 43.8920% ( 597) 00:07:52.926 6856.074 - 6906.486: 47.5320% ( 615) 00:07:52.926 6906.486 - 6956.898: 51.1068% ( 604) 00:07:52.926 6956.898 - 7007.311: 54.6283% ( 595) 00:07:52.926 7007.311 - 7057.723: 57.8835% ( 550) 00:07:52.926 7057.723 - 7108.135: 61.0322% ( 532) 00:07:52.926 7108.135 - 7158.548: 64.0388% ( 508) 00:07:52.926 7158.548 - 7208.960: 66.5838% ( 430) 00:07:52.926 7208.960 - 7259.372: 68.9986% ( 408) 00:07:52.926 7259.372 - 7309.785: 70.9991% ( 338) 00:07:52.926 7309.785 - 7360.197: 72.7865% ( 302) 00:07:52.926 7360.197 - 7410.609: 74.3549% ( 265) 00:07:52.926 7410.609 - 7461.022: 75.6866% ( 225) 00:07:52.926 7461.022 - 7511.434: 76.8643% ( 199) 00:07:52.926 7511.434 - 7561.846: 77.9652% ( 186) 00:07:52.926 7561.846 - 7612.258: 78.8944% ( 157) 00:07:52.926 7612.258 - 7662.671: 79.7585% ( 146) 00:07:52.926 7662.671 - 7713.083: 80.5161% ( 128) 00:07:52.926 7713.083 - 7763.495: 81.1435% ( 106) 00:07:52.926 7763.495 - 7813.908: 81.5874% ( 75) 00:07:52.926 7813.908 - 7864.320: 82.0727% ( 82) 00:07:52.926 7864.320 - 7914.732: 82.4870% ( 70) 00:07:52.926 7914.732 - 7965.145: 82.8894% ( 68) 00:07:52.926 7965.145 - 8015.557: 83.2682% ( 64) 00:07:52.926 8015.557 - 8065.969: 83.6174% ( 59) 00:07:52.926 8065.969 - 8116.382: 83.9370% ( 54) 00:07:52.926 8116.382 - 8166.794: 84.2981% ( 61) 00:07:52.926 8166.794 - 8217.206: 84.6295% ( 56) 00:07:52.926 8217.206 - 8267.618: 84.9846% ( 60) 00:07:52.926 8267.618 - 8318.031: 85.3752% ( 66) 00:07:52.926 8318.031 - 8368.443: 85.6652% ( 49) 00:07:52.926 8368.443 - 8418.855: 85.9730% ( 52) 00:07:52.926 8418.855 - 8469.268: 86.2275% ( 43) 00:07:52.926 8469.268 - 8519.680: 86.5530% ( 55) 00:07:52.926 8519.680 - 8570.092: 86.8726% ( 54) 00:07:52.926 8570.092 - 8620.505: 87.1271% ( 43) 00:07:52.926 8620.505 - 8670.917: 87.4112% ( 48) 00:07:52.926 8670.917 - 8721.329: 87.7071% ( 50) 00:07:52.926 8721.329 - 8771.742: 87.9853% ( 47) 00:07:52.926 8771.742 - 8822.154: 88.2339% ( 42) 00:07:52.926 8822.154 - 8872.566: 88.5062% ( 46) 00:07:52.926 8872.566 - 8922.978: 88.7784% ( 46) 00:07:52.926 8922.978 - 8973.391: 89.0447% ( 45) 00:07:52.926 8973.391 - 9023.803: 89.3229% ( 47) 00:07:52.926 9023.803 - 9074.215: 89.6129% ( 49) 00:07:52.926 9074.215 - 9124.628: 89.8615% ( 42) 00:07:52.926 9124.628 - 9175.040: 90.0746% ( 36) 00:07:52.926 9175.040 - 9225.452: 90.2285% ( 26) 00:07:52.926 9225.452 - 9275.865: 90.3705% ( 24) 00:07:52.926 9275.865 - 9326.277: 90.5185% ( 25) 00:07:52.926 9326.277 - 9376.689: 90.6309% ( 19) 00:07:52.926 9376.689 - 9427.102: 90.7256% ( 16) 00:07:52.926 9427.102 - 9477.514: 90.8262% ( 17) 00:07:52.926 9477.514 - 9527.926: 90.9091% ( 14) 00:07:52.926 9527.926 - 9578.338: 91.0334% ( 21) 00:07:52.926 9578.338 - 9628.751: 91.1162% ( 14) 00:07:52.926 9628.751 - 9679.163: 91.2287% ( 19) 00:07:52.926 9679.163 - 9729.575: 91.3293% ( 17) 00:07:52.926 9729.575 - 9779.988: 91.4358% ( 18) 00:07:52.926 9779.988 - 9830.400: 91.5720% ( 23) 00:07:52.926 9830.400 - 9880.812: 91.6844% ( 19) 00:07:52.926 9880.812 - 9931.225: 91.7910% ( 18) 00:07:52.926 9931.225 - 9981.637: 91.9330% ( 24) 00:07:52.926 9981.637 - 10032.049: 92.0632% ( 22) 00:07:52.926 10032.049 - 10082.462: 92.1579% ( 16) 00:07:52.926 10082.462 - 10132.874: 92.2881% ( 22) 00:07:52.926 10132.874 - 10183.286: 92.4006% ( 19) 00:07:52.926 10183.286 - 10233.698: 92.5071% ( 18) 00:07:52.926 10233.698 - 10284.111: 92.6136% ( 18) 00:07:52.926 10284.111 - 10334.523: 
92.7557% ( 24) 00:07:52.926 10334.523 - 10384.935: 92.8859% ( 22) 00:07:52.926 10384.935 - 10435.348: 93.0102% ( 21) 00:07:52.926 10435.348 - 10485.760: 93.1345% ( 21) 00:07:52.926 10485.760 - 10536.172: 93.2588% ( 21) 00:07:52.926 10536.172 - 10586.585: 93.4126% ( 26) 00:07:52.926 10586.585 - 10636.997: 93.5192% ( 18) 00:07:52.926 10636.997 - 10687.409: 93.6257% ( 18) 00:07:52.926 10687.409 - 10737.822: 93.7145% ( 15) 00:07:52.926 10737.822 - 10788.234: 93.8092% ( 16) 00:07:52.926 10788.234 - 10838.646: 93.9098% ( 17) 00:07:52.926 10838.646 - 10889.058: 93.9749% ( 11) 00:07:52.926 10889.058 - 10939.471: 94.0459% ( 12) 00:07:52.926 10939.471 - 10989.883: 94.1110% ( 11) 00:07:52.926 10989.883 - 11040.295: 94.1880% ( 13) 00:07:52.926 11040.295 - 11090.708: 94.2649% ( 13) 00:07:52.926 11090.708 - 11141.120: 94.3833% ( 20) 00:07:52.926 11141.120 - 11191.532: 94.4661% ( 14) 00:07:52.926 11191.532 - 11241.945: 94.5549% ( 15) 00:07:52.926 11241.945 - 11292.357: 94.6378% ( 14) 00:07:52.926 11292.357 - 11342.769: 94.7443% ( 18) 00:07:52.926 11342.769 - 11393.182: 94.8449% ( 17) 00:07:52.926 11393.182 - 11443.594: 94.9219% ( 13) 00:07:52.926 11443.594 - 11494.006: 95.0166% ( 16) 00:07:52.926 11494.006 - 11544.418: 95.0758% ( 10) 00:07:52.926 11544.418 - 11594.831: 95.1409% ( 11) 00:07:52.926 11594.831 - 11645.243: 95.2119% ( 12) 00:07:52.926 11645.243 - 11695.655: 95.2888% ( 13) 00:07:52.926 11695.655 - 11746.068: 95.3598% ( 12) 00:07:52.927 11746.068 - 11796.480: 95.4664% ( 18) 00:07:52.927 11796.480 - 11846.892: 95.5492% ( 14) 00:07:52.927 11846.892 - 11897.305: 95.6439% ( 16) 00:07:52.927 11897.305 - 11947.717: 95.7327% ( 15) 00:07:52.927 11947.717 - 11998.129: 95.8097% ( 13) 00:07:52.927 11998.129 - 12048.542: 95.9162% ( 18) 00:07:52.927 12048.542 - 12098.954: 96.0168% ( 17) 00:07:52.927 12098.954 - 12149.366: 96.1233% ( 18) 00:07:52.927 12149.366 - 12199.778: 96.2358% ( 19) 00:07:52.927 12199.778 - 12250.191: 96.3246% ( 15) 00:07:52.927 12250.191 - 12300.603: 96.3542% ( 5) 00:07:52.927 12300.603 - 12351.015: 96.4252% ( 12) 00:07:52.927 12351.015 - 12401.428: 96.4903% ( 11) 00:07:52.927 12401.428 - 12451.840: 96.5495% ( 10) 00:07:52.927 12451.840 - 12502.252: 96.6442% ( 16) 00:07:52.927 12502.252 - 12552.665: 96.6619% ( 3) 00:07:52.927 12552.665 - 12603.077: 96.7211% ( 10) 00:07:52.927 12603.077 - 12653.489: 96.7566% ( 6) 00:07:52.927 12653.489 - 12703.902: 96.8336% ( 13) 00:07:52.927 12703.902 - 12754.314: 96.8750% ( 7) 00:07:52.927 12754.314 - 12804.726: 96.9223% ( 8) 00:07:52.927 12804.726 - 12855.138: 96.9401% ( 3) 00:07:52.927 12855.138 - 12905.551: 96.9519% ( 2) 00:07:52.927 12905.551 - 13006.375: 96.9697% ( 3) 00:07:52.927 13006.375 - 13107.200: 96.9815% ( 2) 00:07:52.927 13107.200 - 13208.025: 97.0052% ( 4) 00:07:52.927 13208.025 - 13308.849: 97.0348% ( 5) 00:07:52.927 13308.849 - 13409.674: 97.0585% ( 4) 00:07:52.927 13409.674 - 13510.498: 97.0821% ( 4) 00:07:52.927 13510.498 - 13611.323: 97.1058% ( 4) 00:07:52.927 13611.323 - 13712.148: 97.1295% ( 4) 00:07:52.927 13712.148 - 13812.972: 97.1532% ( 4) 00:07:52.927 13812.972 - 13913.797: 97.1709% ( 3) 00:07:52.927 13913.797 - 14014.622: 97.1887% ( 3) 00:07:52.927 14014.622 - 14115.446: 97.2124% ( 4) 00:07:52.927 14115.446 - 14216.271: 97.2715% ( 10) 00:07:52.927 14216.271 - 14317.095: 97.3426% ( 12) 00:07:52.927 14317.095 - 14417.920: 97.4373% ( 16) 00:07:52.927 14417.920 - 14518.745: 97.5260% ( 15) 00:07:52.927 14518.745 - 14619.569: 97.6089% ( 14) 00:07:52.927 14619.569 - 14720.394: 97.6918% ( 14) 00:07:52.927 14720.394 - 14821.218: 
97.7746% ( 14) 00:07:52.927 14821.218 - 14922.043: 97.8397% ( 11) 00:07:52.927 14922.043 - 15022.868: 97.9226% ( 14) 00:07:52.927 15022.868 - 15123.692: 98.0114% ( 15) 00:07:52.927 15123.692 - 15224.517: 98.0942% ( 14) 00:07:52.927 15224.517 - 15325.342: 98.1593% ( 11) 00:07:52.927 15325.342 - 15426.166: 98.2126% ( 9) 00:07:52.927 15426.166 - 15526.991: 98.2481% ( 6) 00:07:52.927 15526.991 - 15627.815: 98.2955% ( 8) 00:07:52.927 15627.815 - 15728.640: 98.3310% ( 6) 00:07:52.927 15728.640 - 15829.465: 98.3665% ( 6) 00:07:52.927 15829.465 - 15930.289: 98.4079% ( 7) 00:07:52.927 15930.289 - 16031.114: 98.4434% ( 6) 00:07:52.927 16031.114 - 16131.938: 98.4967% ( 9) 00:07:52.927 16131.938 - 16232.763: 98.5736% ( 13) 00:07:52.927 16232.763 - 16333.588: 98.6506% ( 13) 00:07:52.927 16333.588 - 16434.412: 98.7334% ( 14) 00:07:52.927 16434.412 - 16535.237: 98.8281% ( 16) 00:07:52.927 16535.237 - 16636.062: 98.9228% ( 16) 00:07:52.927 16636.062 - 16736.886: 99.0116% ( 15) 00:07:52.927 16736.886 - 16837.711: 99.1004% ( 15) 00:07:52.927 16837.711 - 16938.535: 99.1536% ( 9) 00:07:52.927 16938.535 - 17039.360: 99.1951% ( 7) 00:07:52.927 17039.360 - 17140.185: 99.2365% ( 7) 00:07:52.927 17140.185 - 17241.009: 99.2424% ( 1) 00:07:52.927 25105.329 - 25206.154: 99.2661% ( 4) 00:07:52.927 25206.154 - 25306.978: 99.3016% ( 6) 00:07:52.927 25306.978 - 25407.803: 99.3430% ( 7) 00:07:52.927 25407.803 - 25508.628: 99.3786% ( 6) 00:07:52.927 25508.628 - 25609.452: 99.4081% ( 5) 00:07:52.927 25609.452 - 25710.277: 99.4437% ( 6) 00:07:52.927 25710.277 - 25811.102: 99.4792% ( 6) 00:07:52.927 25811.102 - 26012.751: 99.5502% ( 12) 00:07:52.927 26012.751 - 26214.400: 99.6212% ( 12) 00:07:52.927 30247.385 - 30449.034: 99.6863% ( 11) 00:07:52.927 30449.034 - 30650.683: 99.7573% ( 12) 00:07:52.927 30650.683 - 30852.332: 99.8284% ( 12) 00:07:52.927 30852.332 - 31053.982: 99.9112% ( 14) 00:07:52.927 31053.982 - 31255.631: 99.9882% ( 13) 00:07:52.927 31255.631 - 31457.280: 100.0000% ( 2) 00:07:52.927 00:07:52.927 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:52.927 ============================================================================== 00:07:52.927 Range in us Cumulative IO count 00:07:52.927 4184.222 - 4209.428: 0.0118% ( 2) 00:07:52.927 4209.428 - 4234.634: 0.0237% ( 2) 00:07:52.927 4234.634 - 4259.840: 0.0355% ( 2) 00:07:52.927 4259.840 - 4285.046: 0.0473% ( 2) 00:07:52.927 4285.046 - 4310.252: 0.0651% ( 3) 00:07:52.927 4310.252 - 4335.458: 0.0769% ( 2) 00:07:52.927 4335.458 - 4360.665: 0.0888% ( 2) 00:07:52.927 4360.665 - 4385.871: 0.1006% ( 2) 00:07:52.927 4385.871 - 4411.077: 0.1184% ( 3) 00:07:52.927 4411.077 - 4436.283: 0.1302% ( 2) 00:07:52.927 4436.283 - 4461.489: 0.1420% ( 2) 00:07:52.927 4461.489 - 4486.695: 0.1539% ( 2) 00:07:52.927 4486.695 - 4511.902: 0.1716% ( 3) 00:07:52.927 4511.902 - 4537.108: 0.1835% ( 2) 00:07:52.927 4537.108 - 4562.314: 0.1953% ( 2) 00:07:52.927 4562.314 - 4587.520: 0.2012% ( 1) 00:07:52.927 4587.520 - 4612.726: 0.2131% ( 2) 00:07:52.927 4612.726 - 4637.932: 0.2249% ( 2) 00:07:52.927 4637.932 - 4663.138: 0.2367% ( 2) 00:07:52.927 4663.138 - 4688.345: 0.2486% ( 2) 00:07:52.927 4688.345 - 4713.551: 0.2604% ( 2) 00:07:52.927 4713.551 - 4738.757: 0.2723% ( 2) 00:07:52.927 4738.757 - 4763.963: 0.2782% ( 1) 00:07:52.927 4763.963 - 4789.169: 0.2841% ( 1) 00:07:52.927 4789.169 - 4814.375: 0.2900% ( 1) 00:07:52.927 4814.375 - 4839.582: 0.2959% ( 1) 00:07:52.927 4839.582 - 4864.788: 0.3018% ( 1) 00:07:52.927 4864.788 - 4889.994: 0.3078% ( 1) 00:07:52.927 4889.994 - 
4915.200: 0.3137% ( 1) 00:07:52.927 4915.200 - 4940.406: 0.3196% ( 1) 00:07:52.927 4940.406 - 4965.612: 0.3255% ( 1) 00:07:52.927 4990.818 - 5016.025: 0.3314% ( 1) 00:07:52.927 5016.025 - 5041.231: 0.3374% ( 1) 00:07:52.927 5041.231 - 5066.437: 0.3433% ( 1) 00:07:52.927 5091.643 - 5116.849: 0.3492% ( 1) 00:07:52.927 5116.849 - 5142.055: 0.3551% ( 1) 00:07:52.927 5142.055 - 5167.262: 0.3610% ( 1) 00:07:52.927 5167.262 - 5192.468: 0.3670% ( 1) 00:07:52.927 5192.468 - 5217.674: 0.3729% ( 1) 00:07:52.927 5293.292 - 5318.498: 0.3788% ( 1) 00:07:52.927 5721.797 - 5747.003: 0.3906% ( 2) 00:07:52.927 5747.003 - 5772.209: 0.4202% ( 5) 00:07:52.927 5772.209 - 5797.415: 0.5031% ( 14) 00:07:52.927 5797.415 - 5822.622: 0.5741% ( 12) 00:07:52.927 5822.622 - 5847.828: 0.6510% ( 13) 00:07:52.927 5847.828 - 5873.034: 0.8227% ( 29) 00:07:52.927 5873.034 - 5898.240: 1.0062% ( 31) 00:07:52.927 5898.240 - 5923.446: 1.2133% ( 35) 00:07:52.927 5923.446 - 5948.652: 1.5033% ( 49) 00:07:52.928 5948.652 - 5973.858: 1.7341% ( 39) 00:07:52.928 5973.858 - 5999.065: 2.0241% ( 49) 00:07:52.928 5999.065 - 6024.271: 2.3082% ( 48) 00:07:52.928 6024.271 - 6049.477: 2.5568% ( 42) 00:07:52.928 6049.477 - 6074.683: 2.9474% ( 66) 00:07:52.928 6074.683 - 6099.889: 3.3736% ( 72) 00:07:52.928 6099.889 - 6125.095: 3.8411% ( 79) 00:07:52.928 6125.095 - 6150.302: 4.4448% ( 102) 00:07:52.928 6150.302 - 6175.508: 5.0663% ( 105) 00:07:52.928 6175.508 - 6200.714: 5.7824% ( 121) 00:07:52.928 6200.714 - 6225.920: 6.5696% ( 133) 00:07:52.928 6225.920 - 6251.126: 7.4929% ( 156) 00:07:52.928 6251.126 - 6276.332: 8.4576% ( 163) 00:07:52.928 6276.332 - 6301.538: 9.4934% ( 175) 00:07:52.928 6301.538 - 6326.745: 10.6534% ( 196) 00:07:52.928 6326.745 - 6351.951: 11.7779% ( 190) 00:07:52.928 6351.951 - 6377.157: 13.0208% ( 210) 00:07:52.928 6377.157 - 6402.363: 14.3111% ( 218) 00:07:52.928 6402.363 - 6427.569: 15.6250% ( 222) 00:07:52.928 6427.569 - 6452.775: 17.0810% ( 246) 00:07:52.928 6452.775 - 6503.188: 20.1764% ( 523) 00:07:52.928 6503.188 - 6553.600: 23.5204% ( 565) 00:07:52.928 6553.600 - 6604.012: 27.0123% ( 590) 00:07:52.928 6604.012 - 6654.425: 30.4924% ( 588) 00:07:52.928 6654.425 - 6704.837: 34.0732% ( 605) 00:07:52.928 6704.837 - 6755.249: 37.6598% ( 606) 00:07:52.928 6755.249 - 6805.662: 41.2760% ( 611) 00:07:52.928 6805.662 - 6856.074: 44.8331% ( 601) 00:07:52.928 6856.074 - 6906.486: 48.3842% ( 600) 00:07:52.928 6906.486 - 6956.898: 51.8111% ( 579) 00:07:52.928 6956.898 - 7007.311: 55.2320% ( 578) 00:07:52.928 7007.311 - 7057.723: 58.4162% ( 538) 00:07:52.928 7057.723 - 7108.135: 61.5116% ( 523) 00:07:52.928 7108.135 - 7158.548: 64.2992% ( 471) 00:07:52.928 7158.548 - 7208.960: 66.8324% ( 428) 00:07:52.928 7208.960 - 7259.372: 69.1229% ( 387) 00:07:52.928 7259.372 - 7309.785: 71.1529% ( 343) 00:07:52.928 7309.785 - 7360.197: 72.9581% ( 305) 00:07:52.928 7360.197 - 7410.609: 74.5502% ( 269) 00:07:52.928 7410.609 - 7461.022: 75.9825% ( 242) 00:07:52.928 7461.022 - 7511.434: 77.2550% ( 215) 00:07:52.928 7511.434 - 7561.846: 78.2848% ( 174) 00:07:52.928 7561.846 - 7612.258: 79.1193% ( 141) 00:07:52.928 7612.258 - 7662.671: 79.8532% ( 124) 00:07:52.928 7662.671 - 7713.083: 80.4451% ( 100) 00:07:52.928 7713.083 - 7763.495: 80.9541% ( 86) 00:07:52.928 7763.495 - 7813.908: 81.4453% ( 83) 00:07:52.928 7813.908 - 7864.320: 81.9188% ( 80) 00:07:52.928 7864.320 - 7914.732: 82.3509% ( 73) 00:07:52.928 7914.732 - 7965.145: 82.7770% ( 72) 00:07:52.928 7965.145 - 8015.557: 83.1972% ( 71) 00:07:52.928 8015.557 - 8065.969: 83.6707% ( 80) 00:07:52.928 
8065.969 - 8116.382: 84.0495% ( 64) 00:07:52.928 8116.382 - 8166.794: 84.4223% ( 63) 00:07:52.928 8166.794 - 8217.206: 84.7538% ( 56) 00:07:52.928 8217.206 - 8267.618: 85.0320% ( 47) 00:07:52.928 8267.618 - 8318.031: 85.3397% ( 52) 00:07:52.928 8318.031 - 8368.443: 85.6061% ( 45) 00:07:52.928 8368.443 - 8418.855: 85.9316% ( 55) 00:07:52.928 8418.855 - 8469.268: 86.3104% ( 64) 00:07:52.928 8469.268 - 8519.680: 86.6596% ( 59) 00:07:52.928 8519.680 - 8570.092: 86.9969% ( 57) 00:07:52.928 8570.092 - 8620.505: 87.3461% ( 59) 00:07:52.928 8620.505 - 8670.917: 87.7071% ( 61) 00:07:52.928 8670.917 - 8721.329: 87.9853% ( 47) 00:07:52.928 8721.329 - 8771.742: 88.2280% ( 41) 00:07:52.928 8771.742 - 8822.154: 88.4411% ( 36) 00:07:52.928 8822.154 - 8872.566: 88.6304% ( 32) 00:07:52.928 8872.566 - 8922.978: 88.8139% ( 31) 00:07:52.928 8922.978 - 8973.391: 89.0033% ( 32) 00:07:52.928 8973.391 - 9023.803: 89.2045% ( 34) 00:07:52.928 9023.803 - 9074.215: 89.4117% ( 35) 00:07:52.928 9074.215 - 9124.628: 89.6248% ( 36) 00:07:52.928 9124.628 - 9175.040: 89.8378% ( 36) 00:07:52.928 9175.040 - 9225.452: 90.0568% ( 37) 00:07:52.928 9225.452 - 9275.865: 90.2521% ( 33) 00:07:52.928 9275.865 - 9326.277: 90.4179% ( 28) 00:07:52.928 9326.277 - 9376.689: 90.5421% ( 21) 00:07:52.928 9376.689 - 9427.102: 90.6546% ( 19) 00:07:52.928 9427.102 - 9477.514: 90.8321% ( 30) 00:07:52.928 9477.514 - 9527.926: 90.9742% ( 24) 00:07:52.928 9527.926 - 9578.338: 91.1162% ( 24) 00:07:52.928 9578.338 - 9628.751: 91.2524% ( 23) 00:07:52.928 9628.751 - 9679.163: 91.3826% ( 22) 00:07:52.928 9679.163 - 9729.575: 91.5187% ( 23) 00:07:52.928 9729.575 - 9779.988: 91.6548% ( 23) 00:07:52.928 9779.988 - 9830.400: 91.7495% ( 16) 00:07:52.928 9830.400 - 9880.812: 91.8679% ( 20) 00:07:52.928 9880.812 - 9931.225: 92.0159% ( 25) 00:07:52.928 9931.225 - 9981.637: 92.1165% ( 17) 00:07:52.928 9981.637 - 10032.049: 92.2348% ( 20) 00:07:52.928 10032.049 - 10082.462: 92.3295% ( 16) 00:07:52.928 10082.462 - 10132.874: 92.3887% ( 10) 00:07:52.928 10132.874 - 10183.286: 92.4361% ( 8) 00:07:52.928 10183.286 - 10233.698: 92.5249% ( 15) 00:07:52.928 10233.698 - 10284.111: 92.5959% ( 12) 00:07:52.928 10284.111 - 10334.523: 92.6906% ( 16) 00:07:52.928 10334.523 - 10384.935: 92.8030% ( 19) 00:07:52.928 10384.935 - 10435.348: 92.9096% ( 18) 00:07:52.928 10435.348 - 10485.760: 93.0279% ( 20) 00:07:52.928 10485.760 - 10536.172: 93.1404% ( 19) 00:07:52.928 10536.172 - 10586.585: 93.2765% ( 23) 00:07:52.928 10586.585 - 10636.997: 93.3949% ( 20) 00:07:52.928 10636.997 - 10687.409: 93.5133% ( 20) 00:07:52.928 10687.409 - 10737.822: 93.6257% ( 19) 00:07:52.928 10737.822 - 10788.234: 93.7027% ( 13) 00:07:52.928 10788.234 - 10838.646: 93.7559% ( 9) 00:07:52.928 10838.646 - 10889.058: 93.8210% ( 11) 00:07:52.928 10889.058 - 10939.471: 93.8861% ( 11) 00:07:52.928 10939.471 - 10989.883: 93.9749% ( 15) 00:07:52.928 10989.883 - 11040.295: 94.0637% ( 15) 00:07:52.928 11040.295 - 11090.708: 94.1880% ( 21) 00:07:52.928 11090.708 - 11141.120: 94.3123% ( 21) 00:07:52.928 11141.120 - 11191.532: 94.4425% ( 22) 00:07:52.928 11191.532 - 11241.945: 94.5017% ( 10) 00:07:52.928 11241.945 - 11292.357: 94.5490% ( 8) 00:07:52.928 11292.357 - 11342.769: 94.5845% ( 6) 00:07:52.928 11342.769 - 11393.182: 94.6200% ( 6) 00:07:52.928 11393.182 - 11443.594: 94.6555% ( 6) 00:07:52.928 11443.594 - 11494.006: 94.6911% ( 6) 00:07:52.928 11494.006 - 11544.418: 94.7206% ( 5) 00:07:52.928 11544.418 - 11594.831: 94.7562% ( 6) 00:07:52.928 11594.831 - 11645.243: 94.7917% ( 6) 00:07:52.928 11645.243 - 
11695.655: 94.8331% ( 7) 00:07:52.928 11695.655 - 11746.068: 94.8923% ( 10) 00:07:52.928 11746.068 - 11796.480: 94.9751% ( 14) 00:07:52.928 11796.480 - 11846.892: 95.0462% ( 12) 00:07:52.928 11846.892 - 11897.305: 95.1586% ( 19) 00:07:52.928 11897.305 - 11947.717: 95.2652% ( 18) 00:07:52.928 11947.717 - 11998.129: 95.3303% ( 11) 00:07:52.928 11998.129 - 12048.542: 95.3776% ( 8) 00:07:52.928 12048.542 - 12098.954: 95.4427% ( 11) 00:07:52.928 12098.954 - 12149.366: 95.5196% ( 13) 00:07:52.928 12149.366 - 12199.778: 95.5966% ( 13) 00:07:52.928 12199.778 - 12250.191: 95.6854% ( 15) 00:07:52.928 12250.191 - 12300.603: 95.7741% ( 15) 00:07:52.928 12300.603 - 12351.015: 95.8748% ( 17) 00:07:52.928 12351.015 - 12401.428: 96.0109% ( 23) 00:07:52.928 12401.428 - 12451.840: 96.0760% ( 11) 00:07:52.928 12451.840 - 12502.252: 96.1352% ( 10) 00:07:52.928 12502.252 - 12552.665: 96.1648% ( 5) 00:07:52.928 12552.665 - 12603.077: 96.2121% ( 8) 00:07:52.928 12603.077 - 12653.489: 96.2595% ( 8) 00:07:52.928 12653.489 - 12703.902: 96.3009% ( 7) 00:07:52.928 12703.902 - 12754.314: 96.3482% ( 8) 00:07:52.928 12754.314 - 12804.726: 96.3838% ( 6) 00:07:52.928 12804.726 - 12855.138: 96.4548% ( 12) 00:07:52.928 12855.138 - 12905.551: 96.5199% ( 11) 00:07:52.928 12905.551 - 13006.375: 96.7448% ( 38) 00:07:52.928 13006.375 - 13107.200: 96.8987% ( 26) 00:07:52.928 13107.200 - 13208.025: 97.0170% ( 20) 00:07:52.928 13208.025 - 13308.849: 97.0821% ( 11) 00:07:52.928 13308.849 - 13409.674: 97.1650% ( 14) 00:07:52.928 13409.674 - 13510.498: 97.2775% ( 19) 00:07:52.928 13510.498 - 13611.323: 97.3426% ( 11) 00:07:52.928 13611.323 - 13712.148: 97.4195% ( 13) 00:07:52.928 13712.148 - 13812.972: 97.4669% ( 8) 00:07:52.929 13812.972 - 13913.797: 97.5201% ( 9) 00:07:52.929 13913.797 - 14014.622: 97.5675% ( 8) 00:07:52.929 14014.622 - 14115.446: 97.6148% ( 8) 00:07:52.929 14115.446 - 14216.271: 97.6681% ( 9) 00:07:52.929 14216.271 - 14317.095: 97.6977% ( 5) 00:07:52.929 14317.095 - 14417.920: 97.7273% ( 5) 00:07:52.929 14417.920 - 14518.745: 97.7865% ( 10) 00:07:52.929 14518.745 - 14619.569: 97.8279% ( 7) 00:07:52.929 14619.569 - 14720.394: 97.8752% ( 8) 00:07:52.929 14720.394 - 14821.218: 97.9344% ( 10) 00:07:52.929 14821.218 - 14922.043: 97.9699% ( 6) 00:07:52.929 14922.043 - 15022.868: 98.0114% ( 7) 00:07:52.929 15022.868 - 15123.692: 98.0587% ( 8) 00:07:52.929 15123.692 - 15224.517: 98.1179% ( 10) 00:07:52.929 15224.517 - 15325.342: 98.1889% ( 12) 00:07:52.929 15325.342 - 15426.166: 98.2422% ( 9) 00:07:52.929 15426.166 - 15526.991: 98.3014% ( 10) 00:07:52.929 15526.991 - 15627.815: 98.3665% ( 11) 00:07:52.929 15627.815 - 15728.640: 98.4197% ( 9) 00:07:52.929 15728.640 - 15829.465: 98.4908% ( 12) 00:07:52.929 15829.465 - 15930.289: 98.5440% ( 9) 00:07:52.929 15930.289 - 16031.114: 98.5973% ( 9) 00:07:52.929 16031.114 - 16131.938: 98.6742% ( 13) 00:07:52.929 16131.938 - 16232.763: 98.7630% ( 15) 00:07:52.929 16232.763 - 16333.588: 98.8163% ( 9) 00:07:52.929 16333.588 - 16434.412: 98.8696% ( 9) 00:07:52.929 16434.412 - 16535.237: 98.9287% ( 10) 00:07:52.929 16535.237 - 16636.062: 98.9820% ( 9) 00:07:52.929 16636.062 - 16736.886: 99.0412% ( 10) 00:07:52.929 16736.886 - 16837.711: 99.0945% ( 9) 00:07:52.929 16837.711 - 16938.535: 99.1418% ( 8) 00:07:52.929 16938.535 - 17039.360: 99.1773% ( 6) 00:07:52.929 17039.360 - 17140.185: 99.2128% ( 6) 00:07:52.929 17140.185 - 17241.009: 99.2424% ( 5) 00:07:52.929 25710.277 - 25811.102: 99.2483% ( 1) 00:07:52.929 25811.102 - 26012.751: 99.3312% ( 14) 00:07:52.929 26012.751 - 26214.400: 
99.4081% ( 13) 00:07:52.929 26214.400 - 26416.049: 99.4792% ( 12) 00:07:52.929 26416.049 - 26617.698: 99.5561% ( 13) 00:07:52.929 26617.698 - 26819.348: 99.6212% ( 11) 00:07:52.929 30045.735 - 30247.385: 99.6567% ( 6) 00:07:52.929 30247.385 - 30449.034: 99.7277% ( 12) 00:07:52.929 30449.034 - 30650.683: 99.7869% ( 10) 00:07:52.929 30650.683 - 30852.332: 99.8580% ( 12) 00:07:52.929 30852.332 - 31053.982: 99.9290% ( 12) 00:07:52.929 31053.982 - 31255.631: 99.9941% ( 11) 00:07:52.929 31255.631 - 31457.280: 100.0000% ( 1) 00:07:52.929 00:07:52.929 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:52.929 ============================================================================== 00:07:52.929 Range in us Cumulative IO count 00:07:52.929 4007.778 - 4032.985: 0.0059% ( 1) 00:07:52.929 4032.985 - 4058.191: 0.0296% ( 4) 00:07:52.929 4058.191 - 4083.397: 0.0355% ( 1) 00:07:52.929 4083.397 - 4108.603: 0.0473% ( 2) 00:07:52.929 4108.603 - 4133.809: 0.0533% ( 1) 00:07:52.929 4133.809 - 4159.015: 0.0651% ( 2) 00:07:52.929 4159.015 - 4184.222: 0.0710% ( 1) 00:07:52.929 4184.222 - 4209.428: 0.0829% ( 2) 00:07:52.929 4209.428 - 4234.634: 0.0947% ( 2) 00:07:52.929 4234.634 - 4259.840: 0.1065% ( 2) 00:07:52.929 4259.840 - 4285.046: 0.1302% ( 4) 00:07:52.929 4285.046 - 4310.252: 0.1420% ( 2) 00:07:52.929 4310.252 - 4335.458: 0.1539% ( 2) 00:07:52.929 4335.458 - 4360.665: 0.1657% ( 2) 00:07:52.929 4360.665 - 4385.871: 0.1716% ( 1) 00:07:52.929 4385.871 - 4411.077: 0.1835% ( 2) 00:07:52.929 4411.077 - 4436.283: 0.1953% ( 2) 00:07:52.929 4436.283 - 4461.489: 0.2071% ( 2) 00:07:52.929 4461.489 - 4486.695: 0.2190% ( 2) 00:07:52.929 4486.695 - 4511.902: 0.2308% ( 2) 00:07:52.929 4511.902 - 4537.108: 0.2427% ( 2) 00:07:52.929 4537.108 - 4562.314: 0.2545% ( 2) 00:07:52.929 4562.314 - 4587.520: 0.2663% ( 2) 00:07:52.929 4587.520 - 4612.726: 0.2782% ( 2) 00:07:52.929 4612.726 - 4637.932: 0.2900% ( 2) 00:07:52.929 4637.932 - 4663.138: 0.2959% ( 1) 00:07:52.929 4663.138 - 4688.345: 0.3078% ( 2) 00:07:52.929 4688.345 - 4713.551: 0.3196% ( 2) 00:07:52.929 4713.551 - 4738.757: 0.3314% ( 2) 00:07:52.929 4738.757 - 4763.963: 0.3433% ( 2) 00:07:52.929 4763.963 - 4789.169: 0.3551% ( 2) 00:07:52.929 4789.169 - 4814.375: 0.3670% ( 2) 00:07:52.929 4814.375 - 4839.582: 0.3788% ( 2) 00:07:52.929 5671.385 - 5696.591: 0.4084% ( 5) 00:07:52.929 5696.591 - 5721.797: 0.4202% ( 2) 00:07:52.929 5721.797 - 5747.003: 0.4321% ( 2) 00:07:52.929 5747.003 - 5772.209: 0.4794% ( 8) 00:07:52.929 5772.209 - 5797.415: 0.5268% ( 8) 00:07:52.929 5797.415 - 5822.622: 0.6274% ( 17) 00:07:52.929 5822.622 - 5847.828: 0.7221% ( 16) 00:07:52.929 5847.828 - 5873.034: 0.8108% ( 15) 00:07:52.929 5873.034 - 5898.240: 0.9706% ( 27) 00:07:52.929 5898.240 - 5923.446: 1.1245% ( 26) 00:07:52.929 5923.446 - 5948.652: 1.3258% ( 34) 00:07:52.929 5948.652 - 5973.858: 1.5033% ( 30) 00:07:52.929 5973.858 - 5999.065: 1.7341% ( 39) 00:07:52.929 5999.065 - 6024.271: 1.9709% ( 40) 00:07:52.929 6024.271 - 6049.477: 2.3319% ( 61) 00:07:52.929 6049.477 - 6074.683: 2.7640% ( 73) 00:07:52.929 6074.683 - 6099.889: 3.2019% ( 74) 00:07:52.929 6099.889 - 6125.095: 3.7228% ( 88) 00:07:52.929 6125.095 - 6150.302: 4.2199% ( 84) 00:07:52.929 6150.302 - 6175.508: 4.8473% ( 106) 00:07:52.929 6175.508 - 6200.714: 5.5161% ( 113) 00:07:52.929 6200.714 - 6225.920: 6.1553% ( 108) 00:07:52.929 6225.920 - 6251.126: 7.0312% ( 148) 00:07:52.929 6251.126 - 6276.332: 7.9723% ( 159) 00:07:52.929 6276.332 - 6301.538: 8.9311% ( 162) 00:07:52.929 6301.538 - 6326.745: 10.0616% ( 191) 
00:07:52.929 6326.745 - 6351.951: 11.1565% ( 185) 00:07:52.929 6351.951 - 6377.157: 12.2218% ( 180) 00:07:52.929 6377.157 - 6402.363: 13.4588% ( 209) 00:07:52.929 6402.363 - 6427.569: 14.7372% ( 216) 00:07:52.929 6427.569 - 6452.775: 16.0985% ( 230) 00:07:52.929 6452.775 - 6503.188: 19.0637% ( 501) 00:07:52.929 6503.188 - 6553.600: 22.3366% ( 553) 00:07:52.929 6553.600 - 6604.012: 25.5741% ( 547) 00:07:52.929 6604.012 - 6654.425: 29.0187% ( 582) 00:07:52.929 6654.425 - 6704.837: 32.7000% ( 622) 00:07:52.929 6704.837 - 6755.249: 36.4051% ( 626) 00:07:52.929 6755.249 - 6805.662: 40.2580% ( 651) 00:07:52.929 6805.662 - 6856.074: 44.0282% ( 637) 00:07:52.929 6856.074 - 6906.486: 47.7391% ( 627) 00:07:52.929 6906.486 - 6956.898: 51.5092% ( 637) 00:07:52.929 6956.898 - 7007.311: 55.0663% ( 601) 00:07:52.929 7007.311 - 7057.723: 58.4162% ( 566) 00:07:52.929 7057.723 - 7108.135: 61.6300% ( 543) 00:07:52.929 7108.135 - 7158.548: 64.6958% ( 518) 00:07:52.929 7158.548 - 7208.960: 67.3118% ( 442) 00:07:52.929 7208.960 - 7259.372: 69.6023% ( 387) 00:07:52.929 7259.372 - 7309.785: 71.5554% ( 330) 00:07:52.929 7309.785 - 7360.197: 73.3487% ( 303) 00:07:52.929 7360.197 - 7410.609: 75.1065% ( 297) 00:07:52.929 7410.609 - 7461.022: 76.5152% ( 238) 00:07:52.929 7461.022 - 7511.434: 77.5627% ( 177) 00:07:52.929 7511.434 - 7561.846: 78.5097% ( 160) 00:07:52.929 7561.846 - 7612.258: 79.4508% ( 159) 00:07:52.929 7612.258 - 7662.671: 80.1314% ( 115) 00:07:52.929 7662.671 - 7713.083: 80.6937% ( 95) 00:07:52.929 7713.083 - 7763.495: 81.2500% ( 94) 00:07:52.929 7763.495 - 7813.908: 81.7886% ( 91) 00:07:52.929 7813.908 - 7864.320: 82.2443% ( 77) 00:07:52.929 7864.320 - 7914.732: 82.6645% ( 71) 00:07:52.929 7914.732 - 7965.145: 83.0552% ( 66) 00:07:52.929 7965.145 - 8015.557: 83.4458% ( 66) 00:07:52.929 8015.557 - 8065.969: 83.9193% ( 80) 00:07:52.929 8065.969 - 8116.382: 84.2981% ( 64) 00:07:52.929 8116.382 - 8166.794: 84.7183% ( 71) 00:07:52.929 8166.794 - 8217.206: 85.1385% ( 71) 00:07:52.929 8217.206 - 8267.618: 85.4877% ( 59) 00:07:52.929 8267.618 - 8318.031: 85.8546% ( 62) 00:07:52.929 8318.031 - 8368.443: 86.2334% ( 64) 00:07:52.930 8368.443 - 8418.855: 86.5649% ( 56) 00:07:52.930 8418.855 - 8469.268: 86.8726% ( 52) 00:07:52.930 8469.268 - 8519.680: 87.1626% ( 49) 00:07:52.930 8519.680 - 8570.092: 87.4231% ( 44) 00:07:52.930 8570.092 - 8620.505: 87.6953% ( 46) 00:07:52.930 8620.505 - 8670.917: 87.9321% ( 40) 00:07:52.930 8670.917 - 8721.329: 88.1806% ( 42) 00:07:52.930 8721.329 - 8771.742: 88.4351% ( 43) 00:07:52.930 8771.742 - 8822.154: 88.6541% ( 37) 00:07:52.930 8822.154 - 8872.566: 88.8554% ( 34) 00:07:52.930 8872.566 - 8922.978: 89.0684% ( 36) 00:07:52.930 8922.978 - 8973.391: 89.2578% ( 32) 00:07:52.930 8973.391 - 9023.803: 89.4531% ( 33) 00:07:52.930 9023.803 - 9074.215: 89.6248% ( 29) 00:07:52.930 9074.215 - 9124.628: 89.7786% ( 26) 00:07:52.930 9124.628 - 9175.040: 89.9029% ( 21) 00:07:52.930 9175.040 - 9225.452: 90.0154% ( 19) 00:07:52.930 9225.452 - 9275.865: 90.1278% ( 19) 00:07:52.930 9275.865 - 9326.277: 90.2344% ( 18) 00:07:52.930 9326.277 - 9376.689: 90.3468% ( 19) 00:07:52.930 9376.689 - 9427.102: 90.4770% ( 22) 00:07:52.930 9427.102 - 9477.514: 90.6132% ( 23) 00:07:52.930 9477.514 - 9527.926: 90.7315% ( 20) 00:07:52.930 9527.926 - 9578.338: 90.8262% ( 16) 00:07:52.930 9578.338 - 9628.751: 90.9091% ( 14) 00:07:52.930 9628.751 - 9679.163: 91.0215% ( 19) 00:07:52.930 9679.163 - 9729.575: 91.1222% ( 17) 00:07:52.930 9729.575 - 9779.988: 91.2287% ( 18) 00:07:52.930 9779.988 - 9830.400: 91.3707% ( 
24) 00:07:52.930 9830.400 - 9880.812: 91.5246% ( 26) 00:07:52.930 9880.812 - 9931.225: 91.6726% ( 25) 00:07:52.930 9931.225 - 9981.637: 91.8205% ( 25) 00:07:52.930 9981.637 - 10032.049: 91.9567% ( 23) 00:07:52.930 10032.049 - 10082.462: 92.0987% ( 24) 00:07:52.930 10082.462 - 10132.874: 92.2230% ( 21) 00:07:52.930 10132.874 - 10183.286: 92.3887% ( 28) 00:07:52.930 10183.286 - 10233.698: 92.5545% ( 28) 00:07:52.930 10233.698 - 10284.111: 92.6728% ( 20) 00:07:52.930 10284.111 - 10334.523: 92.7912% ( 20) 00:07:52.930 10334.523 - 10384.935: 92.9155% ( 21) 00:07:52.930 10384.935 - 10435.348: 93.0871% ( 29) 00:07:52.930 10435.348 - 10485.760: 93.2173% ( 22) 00:07:52.930 10485.760 - 10536.172: 93.3890% ( 29) 00:07:52.930 10536.172 - 10586.585: 93.5429% ( 26) 00:07:52.930 10586.585 - 10636.997: 93.7086% ( 28) 00:07:52.930 10636.997 - 10687.409: 93.8506% ( 24) 00:07:52.930 10687.409 - 10737.822: 93.9631% ( 19) 00:07:52.930 10737.822 - 10788.234: 94.0282% ( 11) 00:07:52.930 10788.234 - 10838.646: 94.0933% ( 11) 00:07:52.930 10838.646 - 10889.058: 94.1643% ( 12) 00:07:52.930 10889.058 - 10939.471: 94.2176% ( 9) 00:07:52.930 10939.471 - 10989.883: 94.2768% ( 10) 00:07:52.930 10989.883 - 11040.295: 94.3419% ( 11) 00:07:52.930 11040.295 - 11090.708: 94.3833% ( 7) 00:07:52.930 11090.708 - 11141.120: 94.4247% ( 7) 00:07:52.930 11141.120 - 11191.532: 94.4721% ( 8) 00:07:52.930 11191.532 - 11241.945: 94.5135% ( 7) 00:07:52.930 11241.945 - 11292.357: 94.5490% ( 6) 00:07:52.930 11292.357 - 11342.769: 94.5786% ( 5) 00:07:52.930 11342.769 - 11393.182: 94.6141% ( 6) 00:07:52.930 11393.182 - 11443.594: 94.6496% ( 6) 00:07:52.930 11443.594 - 11494.006: 94.6792% ( 5) 00:07:52.930 11494.006 - 11544.418: 94.7088% ( 5) 00:07:52.930 11544.418 - 11594.831: 94.7325% ( 4) 00:07:52.930 11594.831 - 11645.243: 94.7680% ( 6) 00:07:52.930 11645.243 - 11695.655: 94.8035% ( 6) 00:07:52.930 11695.655 - 11746.068: 94.8804% ( 13) 00:07:52.930 11746.068 - 11796.480: 94.9337% ( 9) 00:07:52.930 11796.480 - 11846.892: 94.9692% ( 6) 00:07:52.930 11846.892 - 11897.305: 95.0047% ( 6) 00:07:52.930 11897.305 - 11947.717: 95.0462% ( 7) 00:07:52.930 11947.717 - 11998.129: 95.0935% ( 8) 00:07:52.930 11998.129 - 12048.542: 95.2178% ( 21) 00:07:52.930 12048.542 - 12098.954: 95.3835% ( 28) 00:07:52.930 12098.954 - 12149.366: 95.5729% ( 32) 00:07:52.930 12149.366 - 12199.778: 95.7031% ( 22) 00:07:52.930 12199.778 - 12250.191: 95.8866% ( 31) 00:07:52.930 12250.191 - 12300.603: 96.0227% ( 23) 00:07:52.930 12300.603 - 12351.015: 96.1589% ( 23) 00:07:52.930 12351.015 - 12401.428: 96.2358% ( 13) 00:07:52.930 12401.428 - 12451.840: 96.3187% ( 14) 00:07:52.930 12451.840 - 12502.252: 96.3601% ( 7) 00:07:52.930 12502.252 - 12552.665: 96.4193% ( 10) 00:07:52.930 12552.665 - 12603.077: 96.4785% ( 10) 00:07:52.930 12603.077 - 12653.489: 96.5436% ( 11) 00:07:52.930 12653.489 - 12703.902: 96.6027% ( 10) 00:07:52.930 12703.902 - 12754.314: 96.6501% ( 8) 00:07:52.930 12754.314 - 12804.726: 96.7034% ( 9) 00:07:52.930 12804.726 - 12855.138: 96.7507% ( 8) 00:07:52.930 12855.138 - 12905.551: 96.8040% ( 9) 00:07:52.930 12905.551 - 13006.375: 96.9164% ( 19) 00:07:52.930 13006.375 - 13107.200: 97.0230% ( 18) 00:07:52.930 13107.200 - 13208.025: 97.1354% ( 19) 00:07:52.930 13208.025 - 13308.849: 97.2538% ( 20) 00:07:52.930 13308.849 - 13409.674: 97.3662% ( 19) 00:07:52.930 13409.674 - 13510.498: 97.4787% ( 19) 00:07:52.930 13510.498 - 13611.323: 97.5852% ( 18) 00:07:52.930 13611.323 - 13712.148: 97.6918% ( 18) 00:07:52.930 13712.148 - 13812.972: 97.7273% ( 6) 00:07:52.930 
14216.271 - 14317.095: 97.7450% ( 3) 00:07:52.930 14317.095 - 14417.920: 97.7628% ( 3) 00:07:52.930 14417.920 - 14518.745: 97.7865% ( 4) 00:07:52.930 14518.745 - 14619.569: 97.8042% ( 3) 00:07:52.930 14619.569 - 14720.394: 97.8279% ( 4) 00:07:52.930 14720.394 - 14821.218: 97.8516% ( 4) 00:07:52.930 14821.218 - 14922.043: 97.8693% ( 3) 00:07:52.930 14922.043 - 15022.868: 97.8930% ( 4) 00:07:52.930 15022.868 - 15123.692: 97.9226% ( 5) 00:07:52.930 15123.692 - 15224.517: 97.9699% ( 8) 00:07:52.930 15224.517 - 15325.342: 98.0350% ( 11) 00:07:52.930 15325.342 - 15426.166: 98.0883% ( 9) 00:07:52.930 15426.166 - 15526.991: 98.1475% ( 10) 00:07:52.930 15526.991 - 15627.815: 98.2126% ( 11) 00:07:52.930 15627.815 - 15728.640: 98.2777% ( 11) 00:07:52.930 15728.640 - 15829.465: 98.3546% ( 13) 00:07:52.930 15829.465 - 15930.289: 98.4612% ( 18) 00:07:52.930 15930.289 - 16031.114: 98.5677% ( 18) 00:07:52.930 16031.114 - 16131.938: 98.6683% ( 17) 00:07:52.930 16131.938 - 16232.763: 98.7689% ( 17) 00:07:52.930 16232.763 - 16333.588: 98.8577% ( 15) 00:07:52.930 16333.588 - 16434.412: 98.9287% ( 12) 00:07:52.930 16434.412 - 16535.237: 99.0057% ( 13) 00:07:52.930 16535.237 - 16636.062: 99.0885% ( 14) 00:07:52.930 16636.062 - 16736.886: 99.1655% ( 13) 00:07:52.930 16736.886 - 16837.711: 99.2069% ( 7) 00:07:52.930 16837.711 - 16938.535: 99.2365% ( 5) 00:07:52.930 16938.535 - 17039.360: 99.2424% ( 1) 00:07:52.930 25105.329 - 25206.154: 99.2720% ( 5) 00:07:52.930 25206.154 - 25306.978: 99.3194% ( 8) 00:07:52.930 25306.978 - 25407.803: 99.3608% ( 7) 00:07:52.930 25407.803 - 25508.628: 99.3963% ( 6) 00:07:52.930 25508.628 - 25609.452: 99.4377% ( 7) 00:07:52.930 25609.452 - 25710.277: 99.4732% ( 6) 00:07:52.930 25710.277 - 25811.102: 99.5088% ( 6) 00:07:52.930 25811.102 - 26012.751: 99.5739% ( 11) 00:07:52.930 26012.751 - 26214.400: 99.6212% ( 8) 00:07:52.930 29844.086 - 30045.735: 99.6745% ( 9) 00:07:52.930 30045.735 - 30247.385: 99.7455% ( 12) 00:07:52.930 30247.385 - 30449.034: 99.8106% ( 11) 00:07:52.930 30449.034 - 30650.683: 99.8816% ( 12) 00:07:52.930 30650.683 - 30852.332: 99.9527% ( 12) 00:07:52.930 30852.332 - 31053.982: 100.0000% ( 8) 00:07:52.930 00:07:52.930 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:52.930 ============================================================================== 00:07:52.930 Range in us Cumulative IO count 00:07:52.930 3881.748 - 3906.954: 0.0059% ( 1) 00:07:52.930 3906.954 - 3932.160: 0.0178% ( 2) 00:07:52.930 3932.160 - 3957.366: 0.0296% ( 2) 00:07:52.930 3957.366 - 3982.572: 0.0414% ( 2) 00:07:52.930 3982.572 - 4007.778: 0.0533% ( 2) 00:07:52.930 4007.778 - 4032.985: 0.0651% ( 2) 00:07:52.930 4032.985 - 4058.191: 0.0829% ( 3) 00:07:52.930 4058.191 - 4083.397: 0.1006% ( 3) 00:07:52.930 4083.397 - 4108.603: 0.1125% ( 2) 00:07:52.930 4108.603 - 4133.809: 0.1243% ( 2) 00:07:52.930 4133.809 - 4159.015: 0.1361% ( 2) 00:07:52.930 4159.015 - 4184.222: 0.1480% ( 2) 00:07:52.931 4184.222 - 4209.428: 0.1539% ( 1) 00:07:52.931 4209.428 - 4234.634: 0.1657% ( 2) 00:07:52.931 4234.634 - 4259.840: 0.1776% ( 2) 00:07:52.931 4259.840 - 4285.046: 0.1953% ( 3) 00:07:52.931 4285.046 - 4310.252: 0.2071% ( 2) 00:07:52.931 4310.252 - 4335.458: 0.2190% ( 2) 00:07:52.931 4335.458 - 4360.665: 0.2308% ( 2) 00:07:52.931 4360.665 - 4385.871: 0.2486% ( 3) 00:07:52.931 4385.871 - 4411.077: 0.2604% ( 2) 00:07:52.931 4411.077 - 4436.283: 0.2723% ( 2) 00:07:52.931 4436.283 - 4461.489: 0.2841% ( 2) 00:07:52.931 4461.489 - 4486.695: 0.3018% ( 3) 00:07:52.931 4486.695 - 4511.902: 0.3137% ( 
2) 00:07:52.931 4511.902 - 4537.108: 0.3255% ( 2) 00:07:52.931 4537.108 - 4562.314: 0.3374% ( 2) 00:07:52.931 4562.314 - 4587.520: 0.3551% ( 3) 00:07:52.931 4587.520 - 4612.726: 0.3670% ( 2) 00:07:52.931 4612.726 - 4637.932: 0.3788% ( 2) 00:07:52.931 5520.148 - 5545.354: 0.3906% ( 2) 00:07:52.931 5545.354 - 5570.560: 0.4025% ( 2) 00:07:52.931 5570.560 - 5595.766: 0.4143% ( 2) 00:07:52.931 5595.766 - 5620.972: 0.4261% ( 2) 00:07:52.931 5620.972 - 5646.178: 0.4439% ( 3) 00:07:52.931 5646.178 - 5671.385: 0.4557% ( 2) 00:07:52.931 5671.385 - 5696.591: 0.4676% ( 2) 00:07:52.931 5696.591 - 5721.797: 0.4794% ( 2) 00:07:52.931 5721.797 - 5747.003: 0.4912% ( 2) 00:07:52.931 5747.003 - 5772.209: 0.5149% ( 4) 00:07:52.931 5772.209 - 5797.415: 0.5800% ( 11) 00:07:52.931 5797.415 - 5822.622: 0.6984% ( 20) 00:07:52.931 5822.622 - 5847.828: 0.7812% ( 14) 00:07:52.931 5847.828 - 5873.034: 0.9174% ( 23) 00:07:52.931 5873.034 - 5898.240: 1.0535% ( 23) 00:07:52.931 5898.240 - 5923.446: 1.2251% ( 29) 00:07:52.931 5923.446 - 5948.652: 1.4678% ( 41) 00:07:52.931 5948.652 - 5973.858: 1.6986% ( 39) 00:07:52.931 5973.858 - 5999.065: 2.0064% ( 52) 00:07:52.931 5999.065 - 6024.271: 2.2727% ( 45) 00:07:52.931 6024.271 - 6049.477: 2.5687% ( 50) 00:07:52.931 6049.477 - 6074.683: 2.9356% ( 62) 00:07:52.931 6074.683 - 6099.889: 3.3913% ( 77) 00:07:52.931 6099.889 - 6125.095: 3.8352% ( 75) 00:07:52.931 6125.095 - 6150.302: 4.3561% ( 88) 00:07:52.931 6150.302 - 6175.508: 5.0485% ( 117) 00:07:52.931 6175.508 - 6200.714: 5.7232% ( 114) 00:07:52.931 6200.714 - 6225.920: 6.5223% ( 135) 00:07:52.931 6225.920 - 6251.126: 7.2857% ( 129) 00:07:52.931 6251.126 - 6276.332: 8.1617% ( 148) 00:07:52.931 6276.332 - 6301.538: 9.0376% ( 148) 00:07:52.931 6301.538 - 6326.745: 9.9609% ( 156) 00:07:52.931 6326.745 - 6351.951: 11.1210% ( 196) 00:07:52.931 6351.951 - 6377.157: 12.2041% ( 183) 00:07:52.931 6377.157 - 6402.363: 13.4943% ( 218) 00:07:52.931 6402.363 - 6427.569: 14.7017% ( 204) 00:07:52.931 6427.569 - 6452.775: 16.0275% ( 224) 00:07:52.931 6452.775 - 6503.188: 18.8684% ( 480) 00:07:52.931 6503.188 - 6553.600: 22.0466% ( 537) 00:07:52.931 6553.600 - 6604.012: 25.5919% ( 599) 00:07:52.931 6604.012 - 6654.425: 28.9181% ( 562) 00:07:52.931 6654.425 - 6704.837: 32.5876% ( 620) 00:07:52.931 6704.837 - 6755.249: 36.5234% ( 665) 00:07:52.931 6755.249 - 6805.662: 40.3113% ( 640) 00:07:52.931 6805.662 - 6856.074: 44.1525% ( 649) 00:07:52.931 6856.074 - 6906.486: 47.8930% ( 632) 00:07:52.931 6906.486 - 6956.898: 51.6217% ( 630) 00:07:52.931 6956.898 - 7007.311: 55.2557% ( 614) 00:07:52.931 7007.311 - 7057.723: 58.5878% ( 563) 00:07:52.931 7057.723 - 7108.135: 61.6951% ( 525) 00:07:52.931 7108.135 - 7158.548: 64.5182% ( 477) 00:07:52.931 7158.548 - 7208.960: 67.0514% ( 428) 00:07:52.931 7208.960 - 7259.372: 69.3063% ( 381) 00:07:52.931 7259.372 - 7309.785: 71.2654% ( 331) 00:07:52.931 7309.785 - 7360.197: 73.1001% ( 310) 00:07:52.931 7360.197 - 7410.609: 74.7455% ( 278) 00:07:52.931 7410.609 - 7461.022: 76.1482% ( 237) 00:07:52.931 7461.022 - 7511.434: 77.3852% ( 209) 00:07:52.931 7511.434 - 7561.846: 78.4920% ( 187) 00:07:52.931 7561.846 - 7612.258: 79.3797% ( 150) 00:07:52.931 7612.258 - 7662.671: 80.1491% ( 130) 00:07:52.931 7662.671 - 7713.083: 80.8239% ( 114) 00:07:52.931 7713.083 - 7763.495: 81.3980% ( 97) 00:07:52.931 7763.495 - 7813.908: 81.8833% ( 82) 00:07:52.931 7813.908 - 7864.320: 82.3509% ( 79) 00:07:52.931 7864.320 - 7914.732: 82.7652% ( 70) 00:07:52.931 7914.732 - 7965.145: 83.1913% ( 72) 00:07:52.931 7965.145 - 8015.557: 
83.6233% ( 73) 00:07:52.931 8015.557 - 8065.969: 84.0376% ( 70) 00:07:52.931 8065.969 - 8116.382: 84.5111% ( 80) 00:07:52.931 8116.382 - 8166.794: 84.9905% ( 81) 00:07:52.931 8166.794 - 8217.206: 85.4226% ( 73) 00:07:52.931 8217.206 - 8267.618: 85.8250% ( 68) 00:07:52.931 8267.618 - 8318.031: 86.2749% ( 76) 00:07:52.931 8318.031 - 8368.443: 86.6181% ( 58) 00:07:52.931 8368.443 - 8418.855: 86.8963% ( 47) 00:07:52.931 8418.855 - 8469.268: 87.1567% ( 44) 00:07:52.931 8469.268 - 8519.680: 87.4053% ( 42) 00:07:52.931 8519.680 - 8570.092: 87.6716% ( 45) 00:07:52.931 8570.092 - 8620.505: 87.9143% ( 41) 00:07:52.931 8620.505 - 8670.917: 88.1451% ( 39) 00:07:52.931 8670.917 - 8721.329: 88.3464% ( 34) 00:07:52.931 8721.329 - 8771.742: 88.5062% ( 27) 00:07:52.931 8771.742 - 8822.154: 88.6955% ( 32) 00:07:52.931 8822.154 - 8872.566: 88.8376% ( 24) 00:07:52.931 8872.566 - 8922.978: 88.9856% ( 25) 00:07:52.931 8922.978 - 8973.391: 89.1454% ( 27) 00:07:52.931 8973.391 - 9023.803: 89.3052% ( 27) 00:07:52.931 9023.803 - 9074.215: 89.4235% ( 20) 00:07:52.931 9074.215 - 9124.628: 89.5537% ( 22) 00:07:52.931 9124.628 - 9175.040: 89.6721% ( 20) 00:07:52.931 9175.040 - 9225.452: 89.7727% ( 17) 00:07:52.931 9225.452 - 9275.865: 89.9029% ( 22) 00:07:52.931 9275.865 - 9326.277: 90.0095% ( 18) 00:07:52.931 9326.277 - 9376.689: 90.1338% ( 21) 00:07:52.931 9376.689 - 9427.102: 90.2344% ( 17) 00:07:52.931 9427.102 - 9477.514: 90.3646% ( 22) 00:07:52.931 9477.514 - 9527.926: 90.4830% ( 20) 00:07:52.931 9527.926 - 9578.338: 90.5895% ( 18) 00:07:52.931 9578.338 - 9628.751: 90.7197% ( 22) 00:07:52.931 9628.751 - 9679.163: 90.8203% ( 17) 00:07:52.931 9679.163 - 9729.575: 90.9564% ( 23) 00:07:52.931 9729.575 - 9779.988: 91.1162% ( 27) 00:07:52.931 9779.988 - 9830.400: 91.2524% ( 23) 00:07:52.931 9830.400 - 9880.812: 91.4181% ( 28) 00:07:52.931 9880.812 - 9931.225: 91.6075% ( 32) 00:07:52.931 9931.225 - 9981.637: 91.7377% ( 22) 00:07:52.931 9981.637 - 10032.049: 91.8857% ( 25) 00:07:52.931 10032.049 - 10082.462: 92.0395% ( 26) 00:07:52.931 10082.462 - 10132.874: 92.2053% ( 28) 00:07:52.931 10132.874 - 10183.286: 92.3532% ( 25) 00:07:52.931 10183.286 - 10233.698: 92.5130% ( 27) 00:07:52.931 10233.698 - 10284.111: 92.6610% ( 25) 00:07:52.931 10284.111 - 10334.523: 92.8149% ( 26) 00:07:52.931 10334.523 - 10384.935: 92.9214% ( 18) 00:07:52.931 10384.935 - 10435.348: 93.0220% ( 17) 00:07:52.931 10435.348 - 10485.760: 93.0990% ( 13) 00:07:52.931 10485.760 - 10536.172: 93.1877% ( 15) 00:07:52.931 10536.172 - 10586.585: 93.3002% ( 19) 00:07:52.931 10586.585 - 10636.997: 93.4482% ( 25) 00:07:52.931 10636.997 - 10687.409: 93.5843% ( 23) 00:07:52.931 10687.409 - 10737.822: 93.7263% ( 24) 00:07:52.931 10737.822 - 10788.234: 93.8625% ( 23) 00:07:52.931 10788.234 - 10838.646: 93.9986% ( 23) 00:07:52.931 10838.646 - 10889.058: 94.1051% ( 18) 00:07:52.931 10889.058 - 10939.471: 94.1939% ( 15) 00:07:52.931 10939.471 - 10989.883: 94.2472% ( 9) 00:07:52.931 10989.883 - 11040.295: 94.3123% ( 11) 00:07:52.931 11040.295 - 11090.708: 94.3537% ( 7) 00:07:52.931 11090.708 - 11141.120: 94.4070% ( 9) 00:07:52.931 11141.120 - 11191.532: 94.4543% ( 8) 00:07:52.931 11191.532 - 11241.945: 94.4898% ( 6) 00:07:52.931 11241.945 - 11292.357: 94.5135% ( 4) 00:07:52.931 11292.357 - 11342.769: 94.5372% ( 4) 00:07:52.931 11342.769 - 11393.182: 94.5904% ( 9) 00:07:52.931 11393.182 - 11443.594: 94.6496% ( 10) 00:07:52.931 11443.594 - 11494.006: 94.8390% ( 32) 00:07:52.931 11494.006 - 11544.418: 94.9633% ( 21) 00:07:52.931 11544.418 - 11594.831: 95.0225% ( 10) 
00:07:52.931 11594.831 - 11645.243: 95.0817% ( 10) 00:07:52.931 11645.243 - 11695.655: 95.1468% ( 11) 00:07:52.931 11695.655 - 11746.068: 95.2000% ( 9) 00:07:52.931 11746.068 - 11796.480: 95.2533% ( 9) 00:07:52.931 11796.480 - 11846.892: 95.3184% ( 11) 00:07:52.931 11846.892 - 11897.305: 95.3717% ( 9) 00:07:52.931 11897.305 - 11947.717: 95.4250% ( 9) 00:07:52.931 11947.717 - 11998.129: 95.4782% ( 9) 00:07:52.931 11998.129 - 12048.542: 95.5256% ( 8) 00:07:52.931 12048.542 - 12098.954: 95.5729% ( 8) 00:07:52.931 12098.954 - 12149.366: 95.6380% ( 11) 00:07:52.931 12149.366 - 12199.778: 95.7268% ( 15) 00:07:52.932 12199.778 - 12250.191: 95.8097% ( 14) 00:07:52.932 12250.191 - 12300.603: 95.8925% ( 14) 00:07:52.932 12300.603 - 12351.015: 95.9576% ( 11) 00:07:52.932 12351.015 - 12401.428: 96.0582% ( 17) 00:07:52.932 12401.428 - 12451.840: 96.1648% ( 18) 00:07:52.932 12451.840 - 12502.252: 96.2417% ( 13) 00:07:52.932 12502.252 - 12552.665: 96.3305% ( 15) 00:07:52.932 12552.665 - 12603.077: 96.4252% ( 16) 00:07:52.932 12603.077 - 12653.489: 96.4903% ( 11) 00:07:52.932 12653.489 - 12703.902: 96.5613% ( 12) 00:07:52.932 12703.902 - 12754.314: 96.6323% ( 12) 00:07:52.932 12754.314 - 12804.726: 96.7270% ( 16) 00:07:52.932 12804.726 - 12855.138: 96.8040% ( 13) 00:07:52.932 12855.138 - 12905.551: 96.8632% ( 10) 00:07:52.932 12905.551 - 13006.375: 96.9697% ( 18) 00:07:52.932 13006.375 - 13107.200: 97.0762% ( 18) 00:07:52.932 13107.200 - 13208.025: 97.1058% ( 5) 00:07:52.932 13208.025 - 13308.849: 97.1413% ( 6) 00:07:52.932 13308.849 - 13409.674: 97.1709% ( 5) 00:07:52.932 13409.674 - 13510.498: 97.1946% ( 4) 00:07:52.932 13510.498 - 13611.323: 97.2360% ( 7) 00:07:52.932 13611.323 - 13712.148: 97.3011% ( 11) 00:07:52.932 13712.148 - 13812.972: 97.3662% ( 11) 00:07:52.932 13812.972 - 13913.797: 97.4313% ( 11) 00:07:52.932 13913.797 - 14014.622: 97.4964% ( 11) 00:07:52.932 14014.622 - 14115.446: 97.5675% ( 12) 00:07:52.932 14115.446 - 14216.271: 97.6385% ( 12) 00:07:52.932 14216.271 - 14317.095: 97.7095% ( 12) 00:07:52.932 14317.095 - 14417.920: 97.7746% ( 11) 00:07:52.932 14417.920 - 14518.745: 97.8279% ( 9) 00:07:52.932 14518.745 - 14619.569: 97.8516% ( 4) 00:07:52.932 14619.569 - 14720.394: 97.8752% ( 4) 00:07:52.932 14720.394 - 14821.218: 97.9048% ( 5) 00:07:52.932 14821.218 - 14922.043: 97.9344% ( 5) 00:07:52.932 14922.043 - 15022.868: 97.9818% ( 8) 00:07:52.932 15022.868 - 15123.692: 98.0291% ( 8) 00:07:52.932 15123.692 - 15224.517: 98.0883% ( 10) 00:07:52.932 15224.517 - 15325.342: 98.1593% ( 12) 00:07:52.932 15325.342 - 15426.166: 98.2304% ( 12) 00:07:52.932 15426.166 - 15526.991: 98.3014% ( 12) 00:07:52.932 15526.991 - 15627.815: 98.4138% ( 19) 00:07:52.932 15627.815 - 15728.640: 98.4967% ( 14) 00:07:52.932 15728.640 - 15829.465: 98.5973% ( 17) 00:07:52.932 15829.465 - 15930.289: 98.6802% ( 14) 00:07:52.932 15930.289 - 16031.114: 98.7571% ( 13) 00:07:52.932 16031.114 - 16131.938: 98.8340% ( 13) 00:07:52.932 16131.938 - 16232.763: 98.9110% ( 13) 00:07:52.932 16232.763 - 16333.588: 98.9879% ( 13) 00:07:52.932 16333.588 - 16434.412: 99.0589% ( 12) 00:07:52.932 16434.412 - 16535.237: 99.1300% ( 12) 00:07:52.932 16535.237 - 16636.062: 99.1832% ( 9) 00:07:52.932 16636.062 - 16736.886: 99.2247% ( 7) 00:07:52.932 16736.886 - 16837.711: 99.2424% ( 3) 00:07:52.932 24500.382 - 24601.206: 99.2483% ( 1) 00:07:52.932 24601.206 - 24702.031: 99.2898% ( 7) 00:07:52.932 24702.031 - 24802.855: 99.3194% ( 5) 00:07:52.932 24802.855 - 24903.680: 99.3608% ( 7) 00:07:52.932 24903.680 - 25004.505: 99.3963% ( 6) 
00:07:52.932 25004.505 - 25105.329: 99.4377% ( 7)
00:07:52.932 25105.329 - 25206.154: 99.4792% ( 7)
00:07:52.932 25206.154 - 25306.978: 99.5088% ( 5)
00:07:52.932 25306.978 - 25407.803: 99.5502% ( 7)
00:07:52.932 25407.803 - 25508.628: 99.5857% ( 6)
00:07:52.932 25508.628 - 25609.452: 99.6212% ( 6)
00:07:52.932 29239.138 - 29440.788: 99.6745% ( 9)
00:07:52.932 29440.788 - 29642.437: 99.7396% ( 11)
00:07:52.932 29642.437 - 29844.086: 99.8165% ( 13)
00:07:52.932 29844.086 - 30045.735: 99.8935% ( 13)
00:07:52.932 30045.735 - 30247.385: 99.9763% ( 14)
00:07:52.932 30247.385 - 30449.034: 100.0000% ( 4)
00:07:52.932
00:07:52.932 06:00:18 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
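(The write-latency results that follow come from the spdk_nvme_perf invocation above. A minimal annotated sketch of that command is below; the flag readings are my interpretation of the perf tool's usage text, not something this log states.)

    # Sketch only: re-running the invocation captured above. The flag comments
    # are assumptions taken from spdk_nvme_perf's help output, not log content.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 \
        -w write \
        -o 12288 \
        -t 1 \
        -LL \
        -i 0
    # -q 128    queue depth: keep 128 I/Os outstanding per namespace
    # -w write  workload pattern: sequential writes
    # -o 12288  I/O size in bytes (12 KiB)
    # -t 1      run time in seconds
    # -LL       software latency tracking; passing -L twice appears to add the
    #           per-bucket histograms ("Range in us Cumulative IO count")
    # -i 0      shared memory group ID, so perf can coexist with other SPDK
    #           processes on the same host

(In the histogram blocks, each row is one latency bucket: the percentage is the cumulative share of I/Os completed at or below the bucket's upper bound, and the parenthesized figure is the count of I/Os that landed in that bucket.)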
00:07:54.315 Initializing NVMe Controllers
00:07:54.315 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:54.315 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:54.315 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:54.315 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:54.315 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:54.315 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:54.315 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:54.315 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:54.315 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:54.315 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:54.315 Initialization complete. Launching workers.
00:07:54.315 ========================================================
00:07:54.315 Latency(us)
00:07:54.315 Device Information : IOPS MiB/s Average min max
00:07:54.315 PCIE (0000:00:13.0) NSID 1 from core 0: 17216.87 201.76 7437.98 5493.90 31345.08
00:07:54.315 PCIE (0000:00:10.0) NSID 1 from core 0: 17280.63 202.51 7404.35 5184.09 27147.69
00:07:54.315 PCIE (0000:00:11.0) NSID 1 from core 0: 17280.63 202.51 7398.05 5048.67 25973.84
00:07:54.315 PCIE (0000:00:12.0) NSID 1 from core 0: 17280.63 202.51 7392.05 4105.47 25846.17
00:07:54.315 PCIE (0000:00:12.0) NSID 2 from core 0: 17280.63 202.51 7386.44 3987.59 25610.94
00:07:54.315 PCIE (0000:00:12.0) NSID 3 from core 0: 17280.63 202.51 7380.80 4053.95 25315.98
00:07:54.315 ========================================================
00:07:54.315 Total : 103620.03 1214.30 7399.92 3987.59 31345.08
00:07:54.315
00:07:54.315 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:54.315 =================================================================================
00:07:54.315 1.00000% : 5999.065us
00:07:54.316 10.00000% : 6326.745us
00:07:54.316 25.00000% : 6553.600us
00:07:54.316 50.00000% : 6906.486us
00:07:54.316 75.00000% : 7461.022us
00:07:54.316 90.00000% : 8721.329us
00:07:54.316 95.00000% : 9779.988us
00:07:54.316 98.00000% : 11342.769us
00:07:54.316 99.00000% : 23492.135us
00:07:54.316 99.50000% : 26012.751us
00:07:54.316 99.90000% : 31255.631us
00:07:54.316 99.99000% : 31457.280us
00:07:54.316 99.99900% : 31457.280us
00:07:54.316 99.99990% : 31457.280us
00:07:54.316 99.99999% : 31457.280us
00:07:54.316
00:07:54.316 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:54.316 =================================================================================
00:07:54.316 1.00000% : 5923.446us
00:07:54.316 10.00000% : 6251.126us
00:07:54.316 25.00000% : 6553.600us
00:07:54.316 50.00000% : 6956.898us
00:07:54.316 75.00000% : 7511.434us
00:07:54.316 90.00000% : 8771.742us
00:07:54.316 95.00000% : 9578.338us
00:07:54.316 98.00000% : 11241.945us
00:07:54.316 99.00000% : 20366.572us
00:07:54.316 99.50000% : 25004.505us
00:07:54.316 99.90000% : 26819.348us
00:07:54.316 99.99000% : 27222.646us
00:07:54.316 99.99900% : 27222.646us
00:07:54.316 99.99990% : 27222.646us
00:07:54.316 99.99999% : 27222.646us
00:07:54.316
00:07:54.316 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:54.316 =================================================================================
00:07:54.316 1.00000% : 6024.271us
00:07:54.316 10.00000% : 6326.745us
00:07:54.316 25.00000% : 6553.600us
00:07:54.316 50.00000% : 6906.486us
00:07:54.316 75.00000% : 7461.022us
00:07:54.316 90.00000% : 8822.154us
00:07:54.316 95.00000% : 9628.751us
00:07:54.316 98.00000% : 11544.418us
00:07:54.316 99.00000% : 22483.889us
00:07:54.316 99.50000% : 22786.363us
00:07:54.316 99.90000% : 25710.277us
00:07:54.316 99.99000% : 26012.751us
00:07:54.316 99.99900% : 26012.751us
00:07:54.316 99.99990% : 26012.751us
00:07:54.316 99.99999% : 26012.751us
00:07:54.316
00:07:54.316 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:54.316 =================================================================================
00:07:54.316 1.00000% : 5973.858us
00:07:54.316 10.00000% : 6326.745us
00:07:54.316 25.00000% : 6553.600us
00:07:54.316 50.00000% : 6906.486us
00:07:54.316 75.00000% : 7461.022us
00:07:54.316 90.00000% : 8822.154us
00:07:54.316 95.00000% : 9628.751us
00:07:54.316 98.00000% : 11292.357us
00:07:54.316 99.00000% : 23189.662us
00:07:54.316 99.50000% : 23391.311us
00:07:54.316 99.90000% : 25508.628us
00:07:54.316 99.99000% : 26012.751us
00:07:54.316 99.99900% : 26012.751us
00:07:54.316 99.99990% : 26012.751us
00:07:54.316 99.99999% : 26012.751us
00:07:54.316
00:07:54.316 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:54.316 =================================================================================
00:07:54.316 1.00000% : 5948.652us
00:07:54.316 10.00000% : 6326.745us
00:07:54.316 25.00000% : 6553.600us
00:07:54.316 50.00000% : 6906.486us
00:07:54.316 75.00000% : 7511.434us
00:07:54.316 90.00000% : 8771.742us
00:07:54.316 95.00000% : 9527.926us
00:07:54.316 98.00000% : 10989.883us
00:07:54.316 99.00000% : 23391.311us
00:07:54.316 99.50000% : 23592.960us
00:07:54.316 99.90000% : 25306.978us
00:07:54.316 99.99000% : 25609.452us
00:07:54.316 99.99900% : 25710.277us
00:07:54.316 99.99990% : 25710.277us
00:07:54.316 99.99999% : 25710.277us
00:07:54.316
00:07:54.316 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:54.316 =================================================================================
00:07:54.316 1.00000% : 5948.652us
00:07:54.316 10.00000% : 6326.745us
00:07:54.316 25.00000% : 6553.600us
00:07:54.316 50.00000% : 6906.486us
00:07:54.316 75.00000% : 7461.022us
00:07:54.316 90.00000% : 8670.917us
00:07:54.316 95.00000% : 9527.926us
00:07:54.316 98.00000% : 10838.646us
00:07:54.316 99.00000% : 23290.486us
00:07:54.316 99.50000% : 23794.609us
00:07:54.316 99.90000% : 25004.505us
00:07:54.316 99.99000% : 25306.978us
00:07:54.316 99.99900% : 25407.803us
00:07:54.316 99.99990% : 25407.803us
00:07:54.316 99.99999% : 25407.803us
00:07:54.316
00:07:54.316 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:54.316 ==============================================================================
00:07:54.316 Range in us Cumulative IO count
00:07:54.316 5469.735 - 5494.942: 0.0058% ( 1)
00:07:54.316 5494.942 - 5520.148: 0.0116% (
1) 00:07:54.316 5520.148 - 5545.354: 0.0174% ( 1) 00:07:54.316 5545.354 - 5570.560: 0.0231% ( 1) 00:07:54.316 5570.560 - 5595.766: 0.0347% ( 2) 00:07:54.316 5595.766 - 5620.972: 0.0463% ( 2) 00:07:54.316 5620.972 - 5646.178: 0.0694% ( 4) 00:07:54.316 5646.178 - 5671.385: 0.1273% ( 10) 00:07:54.316 5671.385 - 5696.591: 0.2025% ( 13) 00:07:54.316 5696.591 - 5721.797: 0.2778% ( 13) 00:07:54.316 5721.797 - 5747.003: 0.3299% ( 9) 00:07:54.316 5747.003 - 5772.209: 0.3472% ( 3) 00:07:54.316 5772.209 - 5797.415: 0.3877% ( 7) 00:07:54.316 5797.415 - 5822.622: 0.4225% ( 6) 00:07:54.316 5822.622 - 5847.828: 0.4514% ( 5) 00:07:54.316 5847.828 - 5873.034: 0.5035% ( 9) 00:07:54.316 5873.034 - 5898.240: 0.6829% ( 31) 00:07:54.316 5898.240 - 5923.446: 0.7234% ( 7) 00:07:54.316 5923.446 - 5948.652: 0.7697% ( 8) 00:07:54.316 5948.652 - 5973.858: 0.8681% ( 17) 00:07:54.316 5973.858 - 5999.065: 1.0012% ( 23) 00:07:54.316 5999.065 - 6024.271: 1.1285% ( 22) 00:07:54.316 6024.271 - 6049.477: 1.3137% ( 32) 00:07:54.316 6049.477 - 6074.683: 1.5336% ( 38) 00:07:54.316 6074.683 - 6099.889: 2.0255% ( 85) 00:07:54.316 6099.889 - 6125.095: 2.4016% ( 65) 00:07:54.316 6125.095 - 6150.302: 2.9340% ( 92) 00:07:54.316 6150.302 - 6175.508: 3.6285% ( 120) 00:07:54.316 6175.508 - 6200.714: 4.6354% ( 174) 00:07:54.316 6200.714 - 6225.920: 5.6424% ( 174) 00:07:54.316 6225.920 - 6251.126: 6.6030% ( 166) 00:07:54.316 6251.126 - 6276.332: 7.6389% ( 179) 00:07:54.316 6276.332 - 6301.538: 8.9699% ( 230) 00:07:54.316 6301.538 - 6326.745: 11.0532% ( 360) 00:07:54.316 6326.745 - 6351.951: 12.6968% ( 284) 00:07:54.316 6351.951 - 6377.157: 14.1493% ( 251) 00:07:54.316 6377.157 - 6402.363: 16.2269% ( 359) 00:07:54.316 6402.363 - 6427.569: 17.7315% ( 260) 00:07:54.316 6427.569 - 6452.775: 18.9931% ( 218) 00:07:54.316 6452.775 - 6503.188: 21.8866% ( 500) 00:07:54.316 6503.188 - 6553.600: 25.4109% ( 609) 00:07:54.316 6553.600 - 6604.012: 29.2188% ( 658) 00:07:54.316 6604.012 - 6654.425: 33.1424% ( 678) 00:07:54.316 6654.425 - 6704.837: 36.3947% ( 562) 00:07:54.316 6704.837 - 6755.249: 39.8032% ( 589) 00:07:54.316 6755.249 - 6805.662: 44.1088% ( 744) 00:07:54.316 6805.662 - 6856.074: 47.4653% ( 580) 00:07:54.316 6856.074 - 6906.486: 50.2199% ( 476) 00:07:54.316 6906.486 - 6956.898: 53.0035% ( 481) 00:07:54.316 6956.898 - 7007.311: 54.9884% ( 343) 00:07:54.316 7007.311 - 7057.723: 57.6215% ( 455) 00:07:54.316 7057.723 - 7108.135: 60.5266% ( 502) 00:07:54.316 7108.135 - 7158.548: 62.4884% ( 339) 00:07:54.316 7158.548 - 7208.960: 64.9074% ( 418) 00:07:54.316 7208.960 - 7259.372: 67.4132% ( 433) 00:07:54.316 7259.372 - 7309.785: 69.9016% ( 430) 00:07:54.316 7309.785 - 7360.197: 72.2106% ( 399) 00:07:54.316 7360.197 - 7410.609: 73.9468% ( 300) 00:07:54.317 7410.609 - 7461.022: 75.4167% ( 254) 00:07:54.317 7461.022 - 7511.434: 76.8113% ( 241) 00:07:54.317 7511.434 - 7561.846: 77.5637% ( 130) 00:07:54.317 7561.846 - 7612.258: 78.1829% ( 107) 00:07:54.317 7612.258 - 7662.671: 78.6169% ( 75) 00:07:54.317 7662.671 - 7713.083: 79.0683% ( 78) 00:07:54.317 7713.083 - 7763.495: 79.5544% ( 84) 00:07:54.317 7763.495 - 7813.908: 80.3009% ( 129) 00:07:54.317 7813.908 - 7864.320: 80.9722% ( 116) 00:07:54.317 7864.320 - 7914.732: 81.4641% ( 85) 00:07:54.317 7914.732 - 7965.145: 82.0544% ( 102) 00:07:54.317 7965.145 - 8015.557: 82.9977% ( 163) 00:07:54.317 8015.557 - 8065.969: 83.8368% ( 145) 00:07:54.317 8065.969 - 8116.382: 84.5660% ( 126) 00:07:54.317 8116.382 - 8166.794: 85.0926% ( 91) 00:07:54.317 8166.794 - 8217.206: 85.5035% ( 71) 00:07:54.317 8217.206 - 
00:07:54.317 [latency histogram rows from 8267.618 us up to 31457.280 us omitted: tail of the preceding controller's table, cumulative IO percentage rising from 85.9375% to 100.0000%]
00:07:54.317 
00:07:54.317 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:54.317 ==============================================================================
00:07:54.317 Range in us Cumulative IO count
00:07:54.318 [histogram rows from 5167.262 us up to 27222.646 us omitted: cumulative IO percentage rising from 0.0058% to 100.0000%]
00:07:54.319 
00:07:54.319 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:54.319 ==============================================================================
00:07:54.319 Range in us Cumulative IO count
00:07:54.319 [histogram rows from 5041.231 us up to 26012.751 us omitted: cumulative IO percentage rising from 0.0115% to 100.0000%]
00:07:54.320 
00:07:54.320 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:54.320 ==============================================================================
00:07:54.321 Range in us Cumulative IO count
00:07:54.321 [histogram rows from 4083.397 us up to 26012.751 us omitted: cumulative IO percentage rising from 0.0058% to 100.0000%]
00:07:54.322 
00:07:54.322 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:54.322 ==============================================================================
00:07:54.322 Range in us Cumulative IO count
00:07:54.322 [histogram rows from 3982.572 us up to 25710.277 us omitted: cumulative IO percentage rising from 0.0058% to 100.0000%]
00:07:54.323 
00:07:54.323 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:54.324 ==============================================================================
00:07:54.324 Range in us Cumulative IO count
00:07:54.324 [histogram rows from 4032.985 us up to 25407.803 us omitted: cumulative IO percentage rising from 0.0058% to 100.0000%]
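Each table above is a cumulative latency histogram: every row is one latency bucket, the percentage is the share of all IOs that completed at or below that bucket's upper edge, and the parenthesized number is the count of IOs landing in that bucket (empty buckets are not printed). A minimal sketch of how such a table can be tallied; the bucket edges and samples here are invented for illustration and are not SPDK's internal ones:

import bisect

def cumulative_histogram(latencies_us, edges_us):
    """Return (range_lo, range_hi, cumulative_pct, count) rows."""
    counts = [0] * (len(edges_us) - 1)
    for lat in latencies_us:
        i = bisect.bisect_right(edges_us, lat) - 1
        if 0 <= i < len(counts):
            counts[i] += 1          # tally the bucket this latency falls in
    total = sum(counts)
    rows, running = [], 0
    for i, c in enumerate(counts):
        if c == 0:
            continue                # the log omits empty buckets
        running += c
        rows.append((edges_us[i], edges_us[i + 1], 100.0 * running / total, c))
    return rows

if __name__ == "__main__":
    samples = [5200.0, 6100.5, 6100.9, 7300.2, 9800.0]   # made-up latencies
    edges = [5000.0, 6000.0, 7000.0, 8000.0, 10000.0]    # made-up bucket edges
    for lo, hi, pct, n in cumulative_histogram(samples, edges):
        print(f"{lo:>10.3f} - {hi:>10.3f}: {pct:7.4f}% ({n:3d})")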
00:07:54.325 06:00:19 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:54.325 
00:07:54.325 real 0m2.412s
00:07:54.325 user 0m2.157s
00:07:54.325 sys 0m0.154s
00:07:54.325 ************************************
00:07:54.325 06:00:19 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:54.325 06:00:19 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:54.325 END TEST nvme_perf
00:07:54.325 ************************************
00:07:54.325 06:00:19 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:54.325 06:00:19 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:54.325 06:00:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:54.325 06:00:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.325 ************************************
00:07:54.325 START TEST nvme_hello_world
00:07:54.325 ************************************
00:07:54.325 06:00:19 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:54.325 Initializing NVMe Controllers
00:07:54.325 Attached to 0000:00:13.0
00:07:54.325 Namespace ID: 1 size: 1GB
00:07:54.325 Attached to 0000:00:10.0
00:07:54.325 Namespace ID: 1 size: 6GB
00:07:54.325 Attached to 0000:00:11.0
00:07:54.325 Namespace ID: 1 size: 5GB
00:07:54.325 Attached to 0000:00:12.0
00:07:54.325 Namespace ID: 1 size: 4GB
00:07:54.325 Namespace ID: 2 size: 4GB
00:07:54.325 Namespace ID: 3 size: 4GB
00:07:54.325 Initialization complete.
00:07:54.325 INFO: using host memory buffer for IO
00:07:54.325 Hello world!
00:07:54.325 INFO: using host memory buffer for IO
00:07:54.325 Hello world!
00:07:54.325 INFO: using host memory buffer for IO
00:07:54.325 Hello world!
00:07:54.325 INFO: using host memory buffer for IO
00:07:54.325 Hello world!
00:07:54.325 INFO: using host memory buffer for IO
00:07:54.325 Hello world!
00:07:54.325 INFO: using host memory buffer for IO
00:07:54.325 Hello world!
00:07:54.325 
00:07:54.325 real 0m0.173s
00:07:54.325 user 0m0.060s
00:07:54.325 sys 0m0.073s
00:07:54.325 06:00:19 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:54.325 ************************************
00:07:54.325 END TEST nvme_hello_world
00:07:54.325 ************************************
00:07:54.325 06:00:19 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
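Broadly, hello_world attaches to each controller, writes a greeting into the first namespace, and reads it back, printing "Hello world!" once the round trip verifies ("using host memory buffer for IO" indicates the payload lives in host RAM). A rough sketch of that write-then-read-back pattern, using an ordinary file in place of an NVMe namespace; the real example goes through SPDK's NVMe driver, which this does not reproduce:

import os, tempfile

SECTOR = 4096   # stand-in sector size, not taken from the devices above

def round_trip(path, payload: bytes) -> bool:
    buf = payload.ljust(SECTOR, b"\0")      # pad payload to one "sector"
    with open(path, "wb") as f:
        f.write(buf)                        # stands in for the NVMe write
    with open(path, "rb") as f:
        back = f.read(SECTOR)               # stands in for the NVMe read
    return back == buf

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        print("Hello world!" if round_trip(path, b"Hello world!") else "mismatch")
    finally:
        os.remove(path)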
00:07:54.325 06:00:19 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:54.325 06:00:19 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:54.325 06:00:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:54.325 06:00:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.325 ************************************
00:07:54.325 START TEST nvme_sgl
00:07:54.325 ************************************
00:07:54.325 06:00:19 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:54.632 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:54.632 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:54.632 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:54.632 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:54.632 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:54.632 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:54.632 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:54.632 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:54.632 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:54.632 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:54.632 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:54.632 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:54.632 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:54.632 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:54.632 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:54.632 NVMe Readv/Writev Request test
00:07:54.632 Attached to 0000:00:13.0
00:07:54.632 Attached to 0000:00:10.0
00:07:54.632 Attached to 0000:00:11.0
00:07:54.632 Attached to 0000:00:12.0
00:07:54.632 0000:00:10.0: build_io_request_2 test passed
00:07:54.632 0000:00:10.0: build_io_request_4 test passed
00:07:54.632 0000:00:10.0: build_io_request_5 test passed
00:07:54.632 0000:00:10.0: build_io_request_6 test passed
00:07:54.632 0000:00:10.0: build_io_request_7 test passed
00:07:54.632 0000:00:10.0: build_io_request_10 test passed
00:07:54.632 0000:00:11.0: build_io_request_2 test passed
00:07:54.632 0000:00:11.0: build_io_request_4 test passed
00:07:54.632 0000:00:11.0: build_io_request_5 test passed
00:07:54.632 0000:00:11.0: build_io_request_6 test passed
00:07:54.632 0000:00:11.0: build_io_request_7 test passed
00:07:54.632 0000:00:11.0: build_io_request_10 test passed
00:07:54.632 Cleaning up...
00:07:54.632 
00:07:54.632 real 0m0.248s
00:07:54.632 user 0m0.113s
00:07:54.632 sys 0m0.081s
00:07:54.632 06:00:20 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:54.632 06:00:20 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:54.632 ************************************
00:07:54.632 END TEST nvme_sgl
00:07:54.632 ************************************
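nvme_sgl builds scatter-gather IO requests of varying shapes; the "Invalid IO length parameter" lines are negative tests, where a request whose segments do not add up to a valid transfer length is expected to be rejected before submission. A sketch of the general shape of such a validity check; the limits below are made up, and the actual rules each build_io_request_N case probes are controller- and driver-specific:

SECTOR_SIZE = 512      # assumed; real devices advertise their own block size
MAX_SEGMENTS = 16      # assumed segment limit for one request

def validate_sgl(segments):
    """segments: list of (address, length) pairs for one IO request."""
    if not segments or len(segments) > MAX_SEGMENTS:
        return False
    total = sum(length for _, length in segments)
    # the total transfer must be a positive whole number of sectors
    return total > 0 and total % SECTOR_SIZE == 0

if __name__ == "__main__":
    ok = [(0x1000, 1024), (0x9000, 512)]        # 1536 B = 3 sectors
    bad = [(0x1000, 1000), (0x9000, 512)]       # 1512 B: not sector aligned
    print(validate_sgl(ok), validate_sgl(bad))  # True False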
00:07:54.632 06:00:20 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:54.632 06:00:20 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:54.632 06:00:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:54.632 06:00:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.632 ************************************
00:07:54.632 START TEST nvme_e2edp
00:07:54.632 ************************************
00:07:54.632 06:00:20 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:54.889 NVMe Write/Read with End-to-End data protection test
00:07:54.889 Attached to 0000:00:13.0
00:07:54.889 Attached to 0000:00:10.0
00:07:54.889 Attached to 0000:00:11.0
00:07:54.889 Attached to 0000:00:12.0
00:07:54.889 Cleaning up...
00:07:54.889 
00:07:54.889 real 0m0.170s
00:07:54.889 user 0m0.053s
00:07:54.889 sys 0m0.077s
00:07:54.889 06:00:20 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:54.889 ************************************
00:07:54.889 END TEST nvme_e2edp
00:07:54.889 06:00:20 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:54.889 ************************************
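End-to-end data protection means each block travels with extra protection info that host and controller both verify, typically a guard CRC plus application and reference tags in the 8-byte T10-DIF layout. A sketch of generating that protection info; the 0x8BB7 polynomial is the T10-DIF guard CRC, but the tag values and the usage here are placeholders rather than what nvme_dp actually submits:

import struct

def crc16_t10dif(data: bytes) -> int:
    """Bitwise CRC-16 with the T10-DIF polynomial 0x8BB7, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def protection_info(block: bytes, app_tag: int, ref_tag: int) -> bytes:
    # 2-byte guard CRC, 2-byte application tag, 4-byte reference tag
    return struct.pack(">HHI", crc16_t10dif(block), app_tag, ref_tag)

if __name__ == "__main__":
    block = bytes(512)                       # one all-zero 512 B block
    print(protection_info(block, 0x1234, 7).hex())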
00:07:54.889 00:07:54.889 real 0m0.170s 00:07:54.889 user 0m0.053s 00:07:54.889 sys 0m0.077s 00:07:54.889 06:00:20 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.889 ************************************ 00:07:54.889 END TEST nvme_e2edp 00:07:54.889 06:00:20 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:54.889 ************************************ 00:07:54.889 06:00:20 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:54.889 06:00:20 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.889 06:00:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.889 06:00:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.889 ************************************ 00:07:54.889 START TEST nvme_reserve 00:07:54.889 ************************************ 00:07:54.889 06:00:20 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:55.146 ===================================================== 00:07:55.146 NVMe Controller at PCI bus 0, device 19, function 0 00:07:55.146 ===================================================== 00:07:55.146 Reservations: Not Supported 00:07:55.146 ===================================================== 00:07:55.146 NVMe Controller at PCI bus 0, device 16, function 0 00:07:55.146 ===================================================== 00:07:55.146 Reservations: Not Supported 00:07:55.146 ===================================================== 00:07:55.146 NVMe Controller at PCI bus 0, device 17, function 0 00:07:55.146 ===================================================== 00:07:55.146 Reservations: Not Supported 00:07:55.146 ===================================================== 00:07:55.146 NVMe Controller at PCI bus 0, device 18, function 0 00:07:55.146 ===================================================== 00:07:55.146 Reservations: Not Supported 00:07:55.146 Reservation test passed 00:07:55.146 00:07:55.146 real 0m0.175s 00:07:55.146 user 0m0.052s 00:07:55.146 sys 0m0.080s 00:07:55.146 06:00:20 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.146 06:00:20 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:55.146 ************************************ 00:07:55.146 END TEST nvme_reserve 00:07:55.146 ************************************ 00:07:55.146 06:00:20 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:55.146 06:00:20 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.146 06:00:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.146 06:00:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.146 ************************************ 00:07:55.146 START TEST nvme_err_injection 00:07:55.146 ************************************ 00:07:55.146 06:00:20 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:55.403 NVMe Error Injection test 00:07:55.403 Attached to 0000:00:13.0 00:07:55.403 Attached to 0000:00:10.0 00:07:55.403 Attached to 0000:00:11.0 00:07:55.403 Attached to 0000:00:12.0 00:07:55.403 0000:00:13.0: get features failed as expected 00:07:55.403 0000:00:10.0: get features failed as expected 00:07:55.403 0000:00:11.0: get features failed as expected 00:07:55.403 0000:00:12.0: get features failed as expected 00:07:55.403 
0000:00:13.0: get features successfully as expected 00:07:55.403 0000:00:10.0: get features successfully as expected 00:07:55.403 0000:00:11.0: get features successfully as expected 00:07:55.403 0000:00:12.0: get features successfully as expected 00:07:55.403 0000:00:13.0: read failed as expected 00:07:55.403 0000:00:10.0: read failed as expected 00:07:55.403 0000:00:11.0: read failed as expected 00:07:55.403 0000:00:12.0: read failed as expected 00:07:55.403 0000:00:13.0: read successfully as expected 00:07:55.403 0000:00:10.0: read successfully as expected 00:07:55.403 0000:00:11.0: read successfully as expected 00:07:55.403 0000:00:12.0: read successfully as expected 00:07:55.403 Cleaning up... 00:07:55.403 00:07:55.403 real 0m0.187s 00:07:55.403 user 0m0.060s 00:07:55.403 sys 0m0.084s 00:07:55.403 06:00:20 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.403 06:00:20 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:55.403 ************************************ 00:07:55.403 END TEST nvme_err_injection 00:07:55.403 ************************************ 00:07:55.403 06:00:20 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:55.403 06:00:20 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:55.403 06:00:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.403 06:00:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.403 ************************************ 00:07:55.403 START TEST nvme_overhead 00:07:55.403 ************************************ 00:07:55.404 06:00:20 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:56.772 Initializing NVMe Controllers 00:07:56.772 Attached to 0000:00:13.0 00:07:56.772 Attached to 0000:00:10.0 00:07:56.772 Attached to 0000:00:11.0 00:07:56.772 Attached to 0000:00:12.0 00:07:56.772 Initialization complete. Launching workers. 
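The overhead binary launched above was given -o 4096 -t 1 -H -i 0; the -H flag is presumably what requests the per-IO submit and complete histograms printed next, where each bucket line carries a latency range in microseconds, a cumulative percentage, and a per-bucket count in parentheses. A hypothetical helper (not part of the SPDK tree; it tolerates an optional timestamp prefix and stops at the first histogram that crosses the requested percentage) to pull an approximate percentile out of output shaped this way:

# Hypothetical helper: print the first bucket whose cumulative percentage
# reaches the requested percentile in lines shaped like
#   <lo> - <hi>: <cum%> ( <count>)
percentile_from_histogram() {
    local want=$1 logfile=$2
    awk -v want="$want" '
        / - / && /%/ && /\(/ {
            hi  = $(NF - 3)      # upper bucket bound, e.g. "11.422:"
            pct = $(NF - 2)      # cumulative percent, e.g. "42.5576%"
            gsub(/[:%]/, "", hi)
            gsub(/%/, "", pct)
            if (pct + 0 >= want) { print hi " us"; exit }
        }' "$logfile"
}

percentile_from_histogram 99 overhead.log   # e.g. ~17.9 us for the submit histogram below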
00:07:56.772 submit (in ns) avg, min, max = 11775.4, 10804.6, 399530.8 00:07:56.772 complete (in ns) avg, min, max = 8249.1, 7373.1, 45950.8 00:07:56.772 00:07:56.772 Submit histogram 00:07:56.772 ================ 00:07:56.772 Range in us Cumulative Count 00:07:56.772 10.782 - 10.831: 0.0064% ( 1) 00:07:56.772 10.978 - 11.028: 0.0192% ( 2) 00:07:56.772 11.028 - 11.077: 0.1154% ( 15) 00:07:56.772 11.077 - 11.126: 0.3142% ( 31) 00:07:56.772 11.126 - 11.175: 0.6990% ( 60) 00:07:56.772 11.175 - 11.225: 1.6867% ( 154) 00:07:56.772 11.225 - 11.274: 3.7068% ( 315) 00:07:56.772 11.274 - 11.323: 8.6577% ( 772) 00:07:56.772 11.323 - 11.372: 21.7341% ( 2039) 00:07:56.772 11.372 - 11.422: 42.5576% ( 3247) 00:07:56.772 11.422 - 11.471: 60.6747% ( 2825) 00:07:56.772 11.471 - 11.520: 73.0135% ( 1924) 00:07:56.772 11.520 - 11.569: 80.8760% ( 1226) 00:07:56.772 11.569 - 11.618: 84.6470% ( 588) 00:07:56.772 11.618 - 11.668: 87.0968% ( 382) 00:07:56.772 11.668 - 11.717: 88.3217% ( 191) 00:07:56.772 11.717 - 11.766: 89.3542% ( 161) 00:07:56.772 11.766 - 11.815: 90.0853% ( 114) 00:07:56.772 11.815 - 11.865: 90.8677% ( 122) 00:07:56.773 11.865 - 11.914: 91.7078% ( 131) 00:07:56.773 11.914 - 11.963: 92.5030% ( 124) 00:07:56.773 11.963 - 12.012: 93.1059% ( 94) 00:07:56.773 12.012 - 12.062: 93.5869% ( 75) 00:07:56.773 12.062 - 12.111: 93.9396% ( 55) 00:07:56.773 12.111 - 12.160: 94.1256% ( 29) 00:07:56.773 12.160 - 12.209: 94.2795% ( 24) 00:07:56.773 12.209 - 12.258: 94.3821% ( 16) 00:07:56.773 12.258 - 12.308: 94.4334% ( 8) 00:07:56.773 12.308 - 12.357: 94.4719% ( 6) 00:07:56.773 12.357 - 12.406: 94.5232% ( 8) 00:07:56.773 12.406 - 12.455: 94.5937% ( 11) 00:07:56.773 12.455 - 12.505: 94.6450% ( 8) 00:07:56.773 12.505 - 12.554: 94.6707% ( 4) 00:07:56.773 12.554 - 12.603: 94.6899% ( 3) 00:07:56.773 12.603 - 12.702: 94.7348% ( 7) 00:07:56.773 12.702 - 12.800: 94.7669% ( 5) 00:07:56.773 12.800 - 12.898: 94.8054% ( 6) 00:07:56.773 12.898 - 12.997: 94.8374% ( 5) 00:07:56.773 13.095 - 13.194: 94.9336% ( 15) 00:07:56.773 13.194 - 13.292: 95.0811% ( 23) 00:07:56.773 13.292 - 13.391: 95.3441% ( 41) 00:07:56.773 13.391 - 13.489: 95.6262% ( 44) 00:07:56.773 13.489 - 13.588: 96.0752% ( 70) 00:07:56.773 13.588 - 13.686: 96.4856% ( 64) 00:07:56.773 13.686 - 13.785: 96.8383% ( 55) 00:07:56.773 13.785 - 13.883: 97.0692% ( 36) 00:07:56.773 13.883 - 13.982: 97.1590% ( 14) 00:07:56.773 13.982 - 14.080: 97.3129% ( 24) 00:07:56.773 14.080 - 14.178: 97.3642% ( 8) 00:07:56.773 14.178 - 14.277: 97.4540% ( 14) 00:07:56.773 14.277 - 14.375: 97.5502% ( 15) 00:07:56.773 14.375 - 14.474: 97.6271% ( 12) 00:07:56.773 14.474 - 14.572: 97.7169% ( 14) 00:07:56.773 14.572 - 14.671: 97.7811% ( 10) 00:07:56.773 14.671 - 14.769: 97.8259% ( 7) 00:07:56.773 14.769 - 14.868: 97.8708% ( 7) 00:07:56.773 14.868 - 14.966: 97.9093% ( 6) 00:07:56.773 14.966 - 15.065: 97.9478% ( 6) 00:07:56.773 15.065 - 15.163: 97.9670% ( 3) 00:07:56.773 15.163 - 15.262: 97.9863% ( 3) 00:07:56.773 15.262 - 15.360: 97.9927% ( 1) 00:07:56.773 15.360 - 15.458: 98.0248% ( 5) 00:07:56.773 15.458 - 15.557: 98.0761% ( 8) 00:07:56.773 15.557 - 15.655: 98.1210% ( 7) 00:07:56.773 15.655 - 15.754: 98.1466% ( 4) 00:07:56.773 15.754 - 15.852: 98.1723% ( 4) 00:07:56.773 15.852 - 15.951: 98.2107% ( 6) 00:07:56.773 15.951 - 16.049: 98.2428% ( 5) 00:07:56.773 16.148 - 16.246: 98.2556% ( 2) 00:07:56.773 16.246 - 16.345: 98.2749% ( 3) 00:07:56.773 16.345 - 16.443: 98.2813% ( 1) 00:07:56.773 16.443 - 16.542: 98.3005% ( 3) 00:07:56.773 16.542 - 16.640: 98.3133% ( 2) 00:07:56.773 16.640 - 16.738: 
98.3198% ( 1) 00:07:56.773 16.738 - 16.837: 98.3326% ( 2) 00:07:56.773 16.837 - 16.935: 98.3390% ( 1) 00:07:56.773 16.935 - 17.034: 98.3518% ( 2) 00:07:56.773 17.034 - 17.132: 98.3839% ( 5) 00:07:56.773 17.132 - 17.231: 98.4288% ( 7) 00:07:56.773 17.231 - 17.329: 98.4993% ( 11) 00:07:56.773 17.329 - 17.428: 98.5378% ( 6) 00:07:56.773 17.428 - 17.526: 98.6404% ( 16) 00:07:56.773 17.526 - 17.625: 98.7110% ( 11) 00:07:56.773 17.625 - 17.723: 98.8264% ( 18) 00:07:56.773 17.723 - 17.822: 98.9611% ( 21) 00:07:56.773 17.822 - 17.920: 99.0380% ( 12) 00:07:56.773 17.920 - 18.018: 99.1150% ( 12) 00:07:56.773 18.018 - 18.117: 99.1919% ( 12) 00:07:56.773 18.117 - 18.215: 99.2433% ( 8) 00:07:56.773 18.215 - 18.314: 99.3715% ( 20) 00:07:56.773 18.314 - 18.412: 99.4292% ( 9) 00:07:56.773 18.412 - 18.511: 99.4549% ( 4) 00:07:56.773 18.511 - 18.609: 99.4805% ( 4) 00:07:56.773 18.609 - 18.708: 99.5062% ( 4) 00:07:56.773 18.708 - 18.806: 99.5254% ( 3) 00:07:56.773 18.806 - 18.905: 99.5383% ( 2) 00:07:56.773 18.905 - 19.003: 99.5703% ( 5) 00:07:56.773 19.003 - 19.102: 99.5896% ( 3) 00:07:56.773 19.102 - 19.200: 99.5960% ( 1) 00:07:56.773 19.200 - 19.298: 99.6280% ( 5) 00:07:56.773 19.298 - 19.397: 99.6537% ( 4) 00:07:56.773 19.397 - 19.495: 99.6793% ( 4) 00:07:56.773 19.495 - 19.594: 99.6922% ( 2) 00:07:56.773 19.594 - 19.692: 99.6986% ( 1) 00:07:56.773 19.692 - 19.791: 99.7114% ( 2) 00:07:56.773 19.889 - 19.988: 99.7178% ( 1) 00:07:56.773 20.283 - 20.382: 99.7306% ( 2) 00:07:56.773 20.677 - 20.775: 99.7435% ( 2) 00:07:56.773 20.775 - 20.874: 99.7499% ( 1) 00:07:56.773 20.874 - 20.972: 99.7563% ( 1) 00:07:56.773 20.972 - 21.071: 99.7627% ( 1) 00:07:56.773 21.071 - 21.169: 99.7691% ( 1) 00:07:56.773 21.268 - 21.366: 99.7820% ( 2) 00:07:56.773 21.662 - 21.760: 99.7948% ( 2) 00:07:56.773 21.858 - 21.957: 99.8012% ( 1) 00:07:56.773 22.548 - 22.646: 99.8076% ( 1) 00:07:56.773 22.745 - 22.843: 99.8204% ( 2) 00:07:56.773 23.040 - 23.138: 99.8268% ( 1) 00:07:56.773 23.237 - 23.335: 99.8397% ( 2) 00:07:56.773 23.434 - 23.532: 99.8461% ( 1) 00:07:56.773 23.532 - 23.631: 99.8525% ( 1) 00:07:56.773 23.729 - 23.828: 99.8589% ( 1) 00:07:56.773 23.926 - 24.025: 99.8653% ( 1) 00:07:56.773 24.025 - 24.123: 99.8717% ( 1) 00:07:56.773 24.123 - 24.222: 99.8782% ( 1) 00:07:56.773 24.320 - 24.418: 99.8910% ( 2) 00:07:56.773 24.812 - 24.911: 99.8974% ( 1) 00:07:56.773 24.911 - 25.009: 99.9038% ( 1) 00:07:56.773 25.797 - 25.994: 99.9102% ( 1) 00:07:56.773 26.191 - 26.388: 99.9166% ( 1) 00:07:56.773 27.372 - 27.569: 99.9230% ( 1) 00:07:56.773 31.902 - 32.098: 99.9295% ( 1) 00:07:56.773 36.037 - 36.234: 99.9359% ( 1) 00:07:56.773 39.582 - 39.778: 99.9423% ( 1) 00:07:56.773 44.505 - 44.702: 99.9487% ( 1) 00:07:56.773 50.215 - 50.412: 99.9551% ( 1) 00:07:56.773 50.412 - 50.806: 99.9615% ( 1) 00:07:56.773 59.865 - 60.258: 99.9679% ( 1) 00:07:56.773 64.985 - 65.378: 99.9743% ( 1) 00:07:56.773 285.145 - 286.720: 99.9808% ( 1) 00:07:56.773 315.077 - 316.652: 99.9872% ( 1) 00:07:56.773 387.545 - 389.120: 99.9936% ( 1) 00:07:56.773 398.572 - 400.148: 100.0000% ( 1) 00:07:56.773 00:07:56.773 Complete histogram 00:07:56.773 ================== 00:07:56.773 Range in us Cumulative Count 00:07:56.773 7.335 - 7.385: 0.0064% ( 1) 00:07:56.773 7.385 - 7.434: 0.1347% ( 20) 00:07:56.773 7.434 - 7.483: 0.4233% ( 45) 00:07:56.773 7.483 - 7.532: 0.7952% ( 58) 00:07:56.773 7.532 - 7.582: 1.0261% ( 36) 00:07:56.773 7.582 - 7.631: 1.1864% ( 25) 00:07:56.773 7.631 - 7.680: 1.2890% ( 16) 00:07:56.773 7.680 - 7.729: 1.3660% ( 12) 00:07:56.773 7.729 - 7.778: 
1.4173% ( 8) 00:07:56.773 7.778 - 7.828: 1.4878% ( 11) 00:07:56.773 7.828 - 7.877: 1.7829% ( 46) 00:07:56.773 7.877 - 7.926: 3.8992% ( 330) 00:07:56.773 7.926 - 7.975: 14.0576% ( 1584) 00:07:56.773 7.975 - 8.025: 32.3543% ( 2853) 00:07:56.773 8.025 - 8.074: 47.9959% ( 2439) 00:07:56.773 8.074 - 8.123: 59.5716% ( 1805) 00:07:56.773 8.123 - 8.172: 70.0378% ( 1632) 00:07:56.773 8.172 - 8.222: 79.2856% ( 1442) 00:07:56.773 8.222 - 8.271: 85.2819% ( 935) 00:07:56.773 8.271 - 8.320: 89.3991% ( 642) 00:07:56.773 8.320 - 8.369: 91.9708% ( 401) 00:07:56.773 8.369 - 8.418: 93.6895% ( 268) 00:07:56.773 8.418 - 8.468: 94.7669% ( 168) 00:07:56.774 8.468 - 8.517: 95.4018% ( 99) 00:07:56.774 8.517 - 8.566: 95.7737% ( 58) 00:07:56.774 8.566 - 8.615: 95.9790% ( 32) 00:07:56.774 8.615 - 8.665: 96.2098% ( 36) 00:07:56.774 8.665 - 8.714: 96.2996% ( 14) 00:07:56.774 8.714 - 8.763: 96.3509% ( 8) 00:07:56.774 8.763 - 8.812: 96.3894% ( 6) 00:07:56.774 8.812 - 8.862: 96.4215% ( 5) 00:07:56.774 8.862 - 8.911: 96.4535% ( 5) 00:07:56.774 8.911 - 8.960: 96.4664% ( 2) 00:07:56.774 8.960 - 9.009: 96.4792% ( 2) 00:07:56.774 9.009 - 9.058: 96.5048% ( 4) 00:07:56.774 9.058 - 9.108: 96.5497% ( 7) 00:07:56.774 9.108 - 9.157: 96.6395% ( 14) 00:07:56.774 9.157 - 9.206: 96.7485% ( 17) 00:07:56.774 9.206 - 9.255: 96.9858% ( 37) 00:07:56.774 9.255 - 9.305: 97.1333% ( 23) 00:07:56.774 9.305 - 9.354: 97.2808% ( 23) 00:07:56.774 9.354 - 9.403: 97.3770% ( 15) 00:07:56.774 9.403 - 9.452: 97.4412% ( 10) 00:07:56.774 9.452 - 9.502: 97.5053% ( 10) 00:07:56.774 9.502 - 9.551: 97.5502% ( 7) 00:07:56.774 9.551 - 9.600: 97.5951% ( 7) 00:07:56.774 9.600 - 9.649: 97.6271% ( 5) 00:07:56.774 9.649 - 9.698: 97.6464% ( 3) 00:07:56.774 9.698 - 9.748: 97.6849% ( 6) 00:07:56.774 9.748 - 9.797: 97.7041% ( 3) 00:07:56.774 9.797 - 9.846: 97.7426% ( 6) 00:07:56.774 9.846 - 9.895: 97.7939% ( 8) 00:07:56.774 9.895 - 9.945: 97.8516% ( 9) 00:07:56.774 9.945 - 9.994: 97.9414% ( 14) 00:07:56.774 9.994 - 10.043: 98.0376% ( 15) 00:07:56.774 10.043 - 10.092: 98.0953% ( 9) 00:07:56.774 10.092 - 10.142: 98.1466% ( 8) 00:07:56.774 10.142 - 10.191: 98.1979% ( 8) 00:07:56.774 10.191 - 10.240: 98.2171% ( 3) 00:07:56.774 10.240 - 10.289: 98.2364% ( 3) 00:07:56.774 10.289 - 10.338: 98.2620% ( 4) 00:07:56.774 10.338 - 10.388: 98.2877% ( 4) 00:07:56.774 10.388 - 10.437: 98.3005% ( 2) 00:07:56.774 10.437 - 10.486: 98.3069% ( 1) 00:07:56.774 10.486 - 10.535: 98.3133% ( 1) 00:07:56.774 10.585 - 10.634: 98.3198% ( 1) 00:07:56.774 10.634 - 10.683: 98.3262% ( 1) 00:07:56.774 10.683 - 10.732: 98.3326% ( 1) 00:07:56.774 10.732 - 10.782: 98.3390% ( 1) 00:07:56.774 10.831 - 10.880: 98.3518% ( 2) 00:07:56.774 10.929 - 10.978: 98.3582% ( 1) 00:07:56.774 10.978 - 11.028: 98.3647% ( 1) 00:07:56.774 11.077 - 11.126: 98.3775% ( 2) 00:07:56.774 11.126 - 11.175: 98.3903% ( 2) 00:07:56.774 11.225 - 11.274: 98.4031% ( 2) 00:07:56.774 11.323 - 11.372: 98.4095% ( 1) 00:07:56.774 11.422 - 11.471: 98.4224% ( 2) 00:07:56.774 11.569 - 11.618: 98.4288% ( 1) 00:07:56.774 11.618 - 11.668: 98.4352% ( 1) 00:07:56.774 12.012 - 12.062: 98.4416% ( 1) 00:07:56.774 12.062 - 12.111: 98.4480% ( 1) 00:07:56.774 12.898 - 12.997: 98.4544% ( 1) 00:07:56.774 13.095 - 13.194: 98.4608% ( 1) 00:07:56.774 13.588 - 13.686: 98.4993% ( 6) 00:07:56.774 13.686 - 13.785: 98.5635% ( 10) 00:07:56.774 13.785 - 13.883: 98.6019% ( 6) 00:07:56.774 13.883 - 13.982: 98.6404% ( 6) 00:07:56.774 13.982 - 14.080: 98.6532% ( 2) 00:07:56.774 14.080 - 14.178: 98.7238% ( 11) 00:07:56.774 14.178 - 14.277: 98.7879% ( 10) 00:07:56.774 14.277 - 
14.375: 98.8392% ( 8) 00:07:56.774 14.375 - 14.474: 98.8905% ( 8) 00:07:56.774 14.474 - 14.572: 98.9611% ( 11) 00:07:56.774 14.572 - 14.671: 99.0380% ( 12) 00:07:56.774 14.671 - 14.769: 99.1214% ( 13) 00:07:56.774 14.769 - 14.868: 99.1855% ( 10) 00:07:56.774 14.868 - 14.966: 99.2689% ( 13) 00:07:56.774 14.966 - 15.065: 99.3523% ( 13) 00:07:56.774 15.065 - 15.163: 99.4164% ( 10) 00:07:56.774 15.163 - 15.262: 99.4805% ( 10) 00:07:56.774 15.262 - 15.360: 99.5318% ( 8) 00:07:56.774 15.360 - 15.458: 99.5639% ( 5) 00:07:56.774 15.458 - 15.557: 99.6345% ( 11) 00:07:56.774 15.557 - 15.655: 99.6409% ( 1) 00:07:56.774 15.655 - 15.754: 99.6473% ( 1) 00:07:56.774 15.754 - 15.852: 99.6729% ( 4) 00:07:56.774 15.852 - 15.951: 99.6922% ( 3) 00:07:56.774 15.951 - 16.049: 99.7050% ( 2) 00:07:56.774 16.049 - 16.148: 99.7242% ( 3) 00:07:56.774 16.443 - 16.542: 99.7306% ( 1) 00:07:56.774 16.542 - 16.640: 99.7371% ( 1) 00:07:56.774 16.640 - 16.738: 99.7499% ( 2) 00:07:56.774 16.837 - 16.935: 99.7563% ( 1) 00:07:56.774 17.132 - 17.231: 99.7691% ( 2) 00:07:56.774 17.231 - 17.329: 99.7755% ( 1) 00:07:56.774 17.428 - 17.526: 99.7820% ( 1) 00:07:56.774 17.723 - 17.822: 99.7948% ( 2) 00:07:56.774 18.018 - 18.117: 99.8012% ( 1) 00:07:56.774 18.117 - 18.215: 99.8076% ( 1) 00:07:56.774 18.314 - 18.412: 99.8140% ( 1) 00:07:56.774 18.412 - 18.511: 99.8204% ( 1) 00:07:56.774 18.511 - 18.609: 99.8268% ( 1) 00:07:56.774 19.003 - 19.102: 99.8333% ( 1) 00:07:56.774 19.200 - 19.298: 99.8461% ( 2) 00:07:56.774 19.298 - 19.397: 99.8589% ( 2) 00:07:56.774 19.495 - 19.594: 99.8653% ( 1) 00:07:56.774 19.692 - 19.791: 99.8717% ( 1) 00:07:56.774 19.791 - 19.889: 99.8846% ( 2) 00:07:56.774 19.889 - 19.988: 99.8974% ( 2) 00:07:56.774 20.086 - 20.185: 99.9166% ( 3) 00:07:56.774 20.283 - 20.382: 99.9295% ( 2) 00:07:56.774 20.578 - 20.677: 99.9359% ( 1) 00:07:56.774 20.677 - 20.775: 99.9423% ( 1) 00:07:56.774 21.169 - 21.268: 99.9487% ( 1) 00:07:56.774 22.252 - 22.351: 99.9551% ( 1) 00:07:56.774 22.449 - 22.548: 99.9615% ( 1) 00:07:56.774 23.532 - 23.631: 99.9679% ( 1) 00:07:56.774 23.828 - 23.926: 99.9743% ( 1) 00:07:56.774 26.978 - 27.175: 99.9808% ( 1) 00:07:56.774 31.508 - 31.705: 99.9872% ( 1) 00:07:56.774 32.492 - 32.689: 99.9936% ( 1) 00:07:56.774 45.883 - 46.080: 100.0000% ( 1) 00:07:56.774 00:07:56.774 00:07:56.774 real 0m1.184s 00:07:56.774 user 0m1.055s 00:07:56.774 sys 0m0.081s 00:07:56.774 06:00:22 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.774 06:00:22 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 ************************************ 00:07:56.774 END TEST nvme_overhead 00:07:56.775 ************************************ 00:07:56.775 06:00:22 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:56.775 06:00:22 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:56.775 06:00:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.775 06:00:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.775 ************************************ 00:07:56.775 START TEST nvme_arbitration 00:07:56.775 ************************************ 00:07:56.775 06:00:22 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:00.051 Initializing NVMe Controllers 00:08:00.051 Attached to 0000:00:13.0 00:08:00.051 Attached to 0000:00:10.0 00:08:00.051 Attached to 0000:00:11.0 00:08:00.051 Attached to 
0000:00:12.0 00:08:00.051 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:00.051 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:00.051 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:00.051 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:00.051 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:00.051 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:00.051 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:00.051 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:00.051 Initialization complete. Launching workers. 00:08:00.051 Starting thread on core 1 with urgent priority queue 00:08:00.051 Starting thread on core 2 with urgent priority queue 00:08:00.051 Starting thread on core 3 with urgent priority queue 00:08:00.051 Starting thread on core 0 with urgent priority queue 00:08:00.051 QEMU NVMe Ctrl (12343 ) core 0: 6023.00 IO/s 16.60 secs/100000 ios 00:08:00.051 QEMU NVMe Ctrl (12342 ) core 0: 6016.00 IO/s 16.62 secs/100000 ios 00:08:00.051 QEMU NVMe Ctrl (12340 ) core 1: 5994.67 IO/s 16.68 secs/100000 ios 00:08:00.051 QEMU NVMe Ctrl (12342 ) core 1: 5994.67 IO/s 16.68 secs/100000 ios 00:08:00.051 QEMU NVMe Ctrl (12341 ) core 2: 5467.67 IO/s 18.29 secs/100000 ios 00:08:00.051 QEMU NVMe Ctrl (12342 ) core 3: 5607.00 IO/s 17.83 secs/100000 ios 00:08:00.051 ======================================================== 00:08:00.051 00:08:00.051 00:08:00.051 real 0m3.199s 00:08:00.051 user 0m9.016s 00:08:00.051 sys 0m0.107s 00:08:00.051 06:00:25 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.051 ************************************ 00:08:00.051 END TEST nvme_arbitration 00:08:00.051 ************************************ 00:08:00.051 06:00:25 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:00.052 06:00:25 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:00.052 06:00:25 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.052 06:00:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.052 06:00:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.052 ************************************ 00:08:00.052 START TEST nvme_single_aen 00:08:00.052 ************************************ 00:08:00.052 06:00:25 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:00.052 Asynchronous Event Request test 00:08:00.052 Attached to 0000:00:13.0 00:08:00.052 Attached to 0000:00:10.0 00:08:00.052 Attached to 0000:00:11.0 00:08:00.052 Attached to 0000:00:12.0 00:08:00.052 Reset controller to setup AER completions for this process 00:08:00.052 Registering asynchronous event callbacks... 
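The single-AEN exercise that follows reads each controller's temperature threshold (NVMe feature ID 0x04, reported here as 343 Kelvin), lowers it beneath the 323 Kelvin composite temperature so every controller posts a temperature asynchronous event, then restores the original value once the aer_cb handlers run. As an illustrative analogy only (the test itself drives the SPDK C API via test/nvme/aer/aer), the same feature poke against a kernel-attached device could be sketched with nvme-cli, assumed installed:

# Analogy only, not what the harness runs: nvme-cli against /dev/nvme0.
dev=/dev/nvme0

nvme get-feature "$dev" -f 0x04             # read the current threshold (343 K here)
nvme set-feature "$dev" -f 0x04 -v 0x0142   # 322 K, just under the 323 K reading,
                                            # so the drive posts a temperature AER
nvme set-feature "$dev" -f 0x04 -v 0x0157   # restore 343 K (70 C) afterwards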
00:08:00.052 Getting orig temperature thresholds of all controllers 00:08:00.052 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.052 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.052 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.052 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.052 Setting all controllers temperature threshold low to trigger AER 00:08:00.052 Waiting for all controllers temperature threshold to be set lower 00:08:00.052 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.052 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:00.052 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.052 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:00.052 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.052 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:00.052 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.052 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:00.052 Waiting for all controllers to trigger AER and reset threshold 00:08:00.052 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.052 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.052 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.052 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.052 Cleaning up... 00:08:00.052 00:08:00.052 real 0m0.191s 00:08:00.052 user 0m0.067s 00:08:00.052 sys 0m0.076s 00:08:00.052 06:00:25 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.052 ************************************ 00:08:00.052 END TEST nvme_single_aen 00:08:00.052 06:00:25 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:00.052 ************************************ 00:08:00.052 06:00:25 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:00.052 06:00:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.052 06:00:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.052 06:00:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.052 ************************************ 00:08:00.052 START TEST nvme_doorbell_aers 00:08:00.052 ************************************ 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:00.052 06:00:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:00.312 [2024-10-01 06:00:25.817868] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:10.297 Executing: test_write_invalid_db 00:08:10.297 Waiting for AER completion... 00:08:10.297 Failure: test_write_invalid_db 00:08:10.297 00:08:10.297 Executing: test_invalid_db_write_overflow_sq 00:08:10.297 Waiting for AER completion... 00:08:10.297 Failure: test_invalid_db_write_overflow_sq 00:08:10.297 00:08:10.297 Executing: test_invalid_db_write_overflow_cq 00:08:10.297 Waiting for AER completion... 00:08:10.297 Failure: test_invalid_db_write_overflow_cq 00:08:10.297 00:08:10.297 06:00:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.297 06:00:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:10.297 [2024-10-01 06:00:35.858340] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:20.302 Executing: test_write_invalid_db 00:08:20.302 Waiting for AER completion... 00:08:20.302 Failure: test_write_invalid_db 00:08:20.302 00:08:20.302 Executing: test_invalid_db_write_overflow_sq 00:08:20.302 Waiting for AER completion... 00:08:20.302 Failure: test_invalid_db_write_overflow_sq 00:08:20.302 00:08:20.302 Executing: test_invalid_db_write_overflow_cq 00:08:20.302 Waiting for AER completion... 00:08:20.302 Failure: test_invalid_db_write_overflow_cq 00:08:20.302 00:08:20.302 06:00:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:20.302 06:00:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:20.303 [2024-10-01 06:00:45.887749] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:30.272 Executing: test_write_invalid_db 00:08:30.272 Waiting for AER completion... 00:08:30.272 Failure: test_write_invalid_db 00:08:30.272 00:08:30.272 Executing: test_invalid_db_write_overflow_sq 00:08:30.272 Waiting for AER completion... 00:08:30.272 Failure: test_invalid_db_write_overflow_sq 00:08:30.272 00:08:30.272 Executing: test_invalid_db_write_overflow_cq 00:08:30.272 Waiting for AER completion... 
00:08:30.272 Failure: test_invalid_db_write_overflow_cq 00:08:30.272 00:08:30.272 06:00:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.272 06:00:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:30.530 [2024-10-01 06:00:55.928484] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 Executing: test_write_invalid_db 00:08:40.551 Waiting for AER completion... 00:08:40.551 Failure: test_write_invalid_db 00:08:40.551 00:08:40.551 Executing: test_invalid_db_write_overflow_sq 00:08:40.551 Waiting for AER completion... 00:08:40.551 Failure: test_invalid_db_write_overflow_sq 00:08:40.551 00:08:40.551 Executing: test_invalid_db_write_overflow_cq 00:08:40.551 Waiting for AER completion... 00:08:40.551 Failure: test_invalid_db_write_overflow_cq 00:08:40.551 00:08:40.551 ************************************ 00:08:40.551 END TEST nvme_doorbell_aers 00:08:40.551 ************************************ 00:08:40.551 00:08:40.551 real 0m40.202s 00:08:40.551 user 0m34.188s 00:08:40.551 sys 0m5.633s 00:08:40.551 06:01:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.551 06:01:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:40.551 06:01:05 nvme -- nvme/nvme.sh@97 -- # uname 00:08:40.551 06:01:05 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:40.551 06:01:05 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:40.551 06:01:05 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:40.551 06:01:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.551 06:01:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.551 ************************************ 00:08:40.551 START TEST nvme_multi_aen 00:08:40.551 ************************************ 00:08:40.551 06:01:05 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:40.551 [2024-10-01 06:01:05.955499] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.955577] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.955589] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.957011] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.957041] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.957049] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.958338] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. 
Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.958364] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.958372] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.960230] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.960333] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 [2024-10-01 06:01:05.960363] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75075) is not found. Dropping the request. 00:08:40.551 Child process pid: 75601 00:08:40.551 [Child] Asynchronous Event Request test 00:08:40.551 [Child] Attached to 0000:00:13.0 00:08:40.551 [Child] Attached to 0000:00:10.0 00:08:40.551 [Child] Attached to 0000:00:11.0 00:08:40.551 [Child] Attached to 0000:00:12.0 00:08:40.551 [Child] Registering asynchronous event callbacks... 00:08:40.551 [Child] Getting orig temperature thresholds of all controllers 00:08:40.551 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.551 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.551 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.551 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.551 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:40.551 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.551 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.551 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.551 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.551 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.551 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.551 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.551 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.551 [Child] Cleaning up... 00:08:40.551 Asynchronous Event Request test 00:08:40.551 Attached to 0000:00:13.0 00:08:40.551 Attached to 0000:00:10.0 00:08:40.551 Attached to 0000:00:11.0 00:08:40.551 Attached to 0000:00:12.0 00:08:40.551 Reset controller to setup AER completions for this process 00:08:40.551 Registering asynchronous event callbacks... 
00:08:40.551 Getting orig temperature thresholds of all controllers 00:08:40.551 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.551 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.551 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.551 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.551 Setting all controllers temperature threshold low to trigger AER 00:08:40.551 Waiting for all controllers temperature threshold to be set lower 00:08:40.551 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.551 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:40.551 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.551 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:40.551 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.551 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:40.551 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.551 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:40.551 Waiting for all controllers to trigger AER and reset threshold 00:08:40.551 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.551 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.551 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.551 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.551 Cleaning up... 00:08:40.812 00:08:40.812 real 0m0.346s 00:08:40.812 user 0m0.114s 00:08:40.812 sys 0m0.143s 00:08:40.812 06:01:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.812 06:01:06 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:40.812 ************************************ 00:08:40.812 END TEST nvme_multi_aen 00:08:40.812 ************************************ 00:08:40.812 06:01:06 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:40.812 06:01:06 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:40.812 06:01:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.812 06:01:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.812 ************************************ 00:08:40.812 START TEST nvme_startup 00:08:40.812 ************************************ 00:08:40.812 06:01:06 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:40.812 Initializing NVMe Controllers 00:08:40.812 Attached to 0000:00:13.0 00:08:40.812 Attached to 0000:00:10.0 00:08:40.812 Attached to 0000:00:11.0 00:08:40.812 Attached to 0000:00:12.0 00:08:40.812 Initialization complete. 00:08:40.812 Time used:122682.414 (us). 
00:08:40.812 00:08:40.812 real 0m0.180s 00:08:40.812 user 0m0.046s 00:08:40.812 sys 0m0.080s 00:08:40.812 06:01:06 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.812 06:01:06 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:40.812 ************************************ 00:08:40.812 END TEST nvme_startup 00:08:40.812 ************************************ 00:08:40.812 06:01:06 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:40.812 06:01:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.812 06:01:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.812 06:01:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.072 ************************************ 00:08:41.072 START TEST nvme_multi_secondary 00:08:41.072 ************************************ 00:08:41.072 06:01:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:41.072 06:01:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75652 00:08:41.072 06:01:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:41.072 06:01:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75653 00:08:41.072 06:01:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:41.072 06:01:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:44.411 Initializing NVMe Controllers 00:08:44.411 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.411 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.411 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.411 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.411 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:44.411 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:44.411 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:44.411 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:44.411 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:44.411 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:44.411 Initialization complete. Launching workers. 
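nvme_multi_secondary exercises SPDK's multi-process mode: all three spdk_nvme_perf invocations above pass -i 0, the same shared-memory group ID, so the two secondary processes attach to controllers the primary initialized, while distinct core masks (0x1, 0x2, 0x4) keep them on separate cores and the 5-second primary outlives the 3-second secondaries. Reconstructed from the nvme.sh@51-57 lines in the trace (approximate; the canonical version is test/nvme/nvme.sh):

# Approximate reconstruction of the first nvme_multi_secondary round.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, longest-lived
pid0=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
pid1=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary on core 2, foreground
wait "$pid0"                                        # the `wait 75652` in the trace
wait "$pid1"                                        # the `wait 75653` in the trace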
00:08:44.411 ======================================================== 00:08:44.411 Latency(us) 00:08:44.411 Device Information : IOPS MiB/s Average min max 00:08:44.411 PCIE (0000:00:13.0) NSID 1 from core 2: 1644.16 6.42 9731.07 1799.68 23331.42 00:08:44.411 PCIE (0000:00:10.0) NSID 1 from core 2: 1644.16 6.42 9732.16 1942.86 21208.25 00:08:44.411 PCIE (0000:00:11.0) NSID 1 from core 2: 1644.16 6.42 9737.65 1857.30 25911.33 00:08:44.411 PCIE (0000:00:12.0) NSID 1 from core 2: 1644.16 6.42 9751.52 1828.15 21460.05 00:08:44.411 PCIE (0000:00:12.0) NSID 2 from core 2: 1644.16 6.42 9753.93 1762.58 23000.91 00:08:44.411 PCIE (0000:00:12.0) NSID 3 from core 2: 1644.16 6.42 9754.97 1883.86 24671.96 00:08:44.411 ======================================================== 00:08:44.411 Total : 9864.95 38.53 9743.55 1762.58 25911.33 00:08:44.411 00:08:44.411 Initializing NVMe Controllers 00:08:44.411 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.411 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.411 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.411 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.411 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:44.411 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:44.411 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:44.411 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:44.411 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:44.411 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:44.411 Initialization complete. Launching workers. 00:08:44.411 ======================================================== 00:08:44.411 Latency(us) 00:08:44.411 Device Information : IOPS MiB/s Average min max 00:08:44.411 PCIE (0000:00:13.0) NSID 1 from core 1: 3994.66 15.60 4004.61 1119.60 9350.13 00:08:44.411 PCIE (0000:00:10.0) NSID 1 from core 1: 3994.66 15.60 4003.78 984.36 9234.14 00:08:44.411 PCIE (0000:00:11.0) NSID 1 from core 1: 3994.66 15.60 4005.05 1102.53 9205.85 00:08:44.411 PCIE (0000:00:12.0) NSID 1 from core 1: 3994.66 15.60 4005.03 1080.59 9169.23 00:08:44.411 PCIE (0000:00:12.0) NSID 2 from core 1: 3994.66 15.60 4005.36 1064.29 9379.31 00:08:44.411 PCIE (0000:00:12.0) NSID 3 from core 1: 3994.66 15.60 4005.31 1111.49 9525.26 00:08:44.411 ======================================================== 00:08:44.411 Total : 23967.98 93.62 4004.86 984.36 9525.26 00:08:44.411 00:08:44.411 06:01:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75652 00:08:46.320 Initializing NVMe Controllers 00:08:46.320 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.320 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.320 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.320 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.320 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:46.320 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:46.320 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:46.320 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:46.320 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:46.320 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:46.320 Initialization complete. Launching workers. 
00:08:46.320 ======================================================== 00:08:46.320 Latency(us) 00:08:46.320 Device Information : IOPS MiB/s Average min max 00:08:46.320 PCIE (0000:00:13.0) NSID 1 from core 0: 4753.83 18.57 3365.16 758.67 11748.96 00:08:46.320 PCIE (0000:00:10.0) NSID 1 from core 0: 4753.83 18.57 3364.00 737.06 10543.82 00:08:46.320 PCIE (0000:00:11.0) NSID 1 from core 0: 4753.83 18.57 3365.04 754.71 10550.68 00:08:46.320 PCIE (0000:00:12.0) NSID 1 from core 0: 4753.83 18.57 3364.93 758.98 11393.46 00:08:46.320 PCIE (0000:00:12.0) NSID 2 from core 0: 4753.83 18.57 3364.82 764.16 11245.90 00:08:46.320 PCIE (0000:00:12.0) NSID 3 from core 0: 4757.03 18.58 3362.49 767.66 11454.84 00:08:46.320 ======================================================== 00:08:46.320 Total : 28526.21 111.43 3364.41 737.06 11748.96 00:08:46.320 00:08:46.320 06:01:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75653 00:08:46.320 06:01:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75722 00:08:46.320 06:01:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:46.320 06:01:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75723 00:08:46.320 06:01:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:46.320 06:01:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:49.623 Initializing NVMe Controllers 00:08:49.623 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.623 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.623 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.623 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.623 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:49.623 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:49.623 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:49.623 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:49.623 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:49.623 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:49.623 Initialization complete. Launching workers. 
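The second round, traced above as nvme.sh@60-64 with pid0=75722 and pid1=75723, swaps the role of the long-lived process: here the 5-second job is a secondary (core mask 0x4), so it keeps issuing I/O after the 3-second primary exits, exercising cross-process teardown order. Reusing $PERF from the sketch above, roughly:

# Second round, reconstructed the same way; the long run is now a secondary.
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 &   # primary, exits first
pid0=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # short secondary
pid1=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4     # long-lived secondary, foreground
wait "$pid0"                                        # `wait 75722` in the trace
wait "$pid1"                                        # `wait 75723` in the trace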
00:08:49.623 ======================================================== 00:08:49.623 Latency(us) 00:08:49.623 Device Information : IOPS MiB/s Average min max 00:08:49.623 PCIE (0000:00:13.0) NSID 1 from core 0: 3212.88 12.55 4979.35 1913.84 10513.17 00:08:49.623 PCIE (0000:00:10.0) NSID 1 from core 0: 3212.88 12.55 4977.87 2127.47 11766.77 00:08:49.623 PCIE (0000:00:11.0) NSID 1 from core 0: 3212.88 12.55 4979.80 1919.94 11641.28 00:08:49.623 PCIE (0000:00:12.0) NSID 1 from core 0: 3212.88 12.55 4980.04 1472.72 11314.27 00:08:49.623 PCIE (0000:00:12.0) NSID 2 from core 0: 3212.88 12.55 4980.26 1415.90 11371.00 00:08:49.623 PCIE (0000:00:12.0) NSID 3 from core 0: 3218.20 12.57 4971.88 1538.73 10840.51 00:08:49.623 ======================================================== 00:08:49.623 Total : 19282.58 75.32 4978.20 1415.90 11766.77 00:08:49.623 00:08:49.623 Initializing NVMe Controllers 00:08:49.623 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.623 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.623 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.623 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.623 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:49.623 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:49.623 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:49.623 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:49.623 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:49.623 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:49.623 Initialization complete. Launching workers. 00:08:49.623 ======================================================== 00:08:49.623 Latency(us) 00:08:49.623 Device Information : IOPS MiB/s Average min max 00:08:49.623 PCIE (0000:00:13.0) NSID 1 from core 1: 3421.73 13.37 4675.36 1193.37 11143.29 00:08:49.623 PCIE (0000:00:10.0) NSID 1 from core 1: 3421.73 13.37 4674.45 1300.11 11370.35 00:08:49.623 PCIE (0000:00:11.0) NSID 1 from core 1: 3421.73 13.37 4675.70 1453.92 12386.73 00:08:49.623 PCIE (0000:00:12.0) NSID 1 from core 1: 3421.73 13.37 4675.58 1355.51 12684.12 00:08:49.623 PCIE (0000:00:12.0) NSID 2 from core 1: 3421.73 13.37 4675.42 1410.48 12185.86 00:08:49.623 PCIE (0000:00:12.0) NSID 3 from core 1: 3427.06 13.39 4668.01 1032.11 12945.65 00:08:49.623 ======================================================== 00:08:49.623 Total : 20535.70 80.22 4674.09 1032.11 12945.65 00:08:49.623 00:08:52.167 Initializing NVMe Controllers 00:08:52.167 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.167 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.167 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.167 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.167 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:52.167 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:52.167 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:52.167 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:52.167 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:52.167 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:52.167 Initialization complete. Launching workers. 
00:08:52.167 ======================================================== 00:08:52.167 Latency(us) 00:08:52.167 Device Information : IOPS MiB/s Average min max 00:08:52.167 PCIE (0000:00:13.0) NSID 1 from core 2: 1908.49 7.46 8382.71 1762.13 26680.84 00:08:52.167 PCIE (0000:00:10.0) NSID 1 from core 2: 1911.29 7.47 8368.29 1794.14 26717.63 00:08:52.167 PCIE (0000:00:11.0) NSID 1 from core 2: 1908.49 7.46 8381.65 1564.60 26489.36 00:08:52.167 PCIE (0000:00:12.0) NSID 1 from core 2: 1908.49 7.46 8380.89 1454.45 32458.27 00:08:52.167 PCIE (0000:00:12.0) NSID 2 from core 2: 1908.49 7.46 8379.62 1071.93 33956.89 00:08:52.167 PCIE (0000:00:12.0) NSID 3 from core 2: 1908.49 7.46 8378.79 974.94 34223.28 00:08:52.167 ======================================================== 00:08:52.167 Total : 11453.75 44.74 8378.66 974.94 34223.28 00:08:52.167 00:08:52.167 ************************************ 00:08:52.167 END TEST nvme_multi_secondary 00:08:52.167 ************************************ 00:08:52.167 06:01:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75722 00:08:52.167 06:01:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75723 00:08:52.167 00:08:52.167 real 0m10.787s 00:08:52.167 user 0m18.192s 00:08:52.167 sys 0m0.557s 00:08:52.167 06:01:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.167 06:01:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:52.167 06:01:17 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:52.167 06:01:17 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:52.167 06:01:17 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/74690 ]] 00:08:52.167 06:01:17 nvme -- common/autotest_common.sh@1090 -- # kill 74690 00:08:52.167 06:01:17 nvme -- common/autotest_common.sh@1091 -- # wait 74690 00:08:52.167 [2024-10-01 06:01:17.251946] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.252015] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.252031] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.252048] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.252615] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.252658] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.252672] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.252686] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.253213] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 
00:08:52.167 [2024-10-01 06:01:17.253251] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.253264] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.253282] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.253886] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.253930] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.253944] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 [2024-10-01 06:01:17.253958] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75600) is not found. Dropping the request. 00:08:52.167 06:01:17 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:52.167 06:01:17 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:52.167 06:01:17 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:52.167 06:01:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.167 06:01:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.167 06:01:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.167 ************************************ 00:08:52.167 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:52.167 ************************************ 00:08:52.167 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:52.167 * Looking for test storage... 
00:08:52.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:52.167 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:52.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.168 --rc genhtml_branch_coverage=1 00:08:52.168 --rc genhtml_function_coverage=1 00:08:52.168 --rc genhtml_legend=1 00:08:52.168 --rc geninfo_all_blocks=1 00:08:52.168 --rc geninfo_unexecuted_blocks=1 00:08:52.168 00:08:52.168 ' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:52.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.168 --rc genhtml_branch_coverage=1 00:08:52.168 --rc genhtml_function_coverage=1 00:08:52.168 --rc genhtml_legend=1 00:08:52.168 --rc geninfo_all_blocks=1 00:08:52.168 --rc geninfo_unexecuted_blocks=1 00:08:52.168 00:08:52.168 ' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:52.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.168 --rc genhtml_branch_coverage=1 00:08:52.168 --rc genhtml_function_coverage=1 00:08:52.168 --rc genhtml_legend=1 00:08:52.168 --rc geninfo_all_blocks=1 00:08:52.168 --rc geninfo_unexecuted_blocks=1 00:08:52.168 00:08:52.168 ' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:52.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.168 --rc genhtml_branch_coverage=1 00:08:52.168 --rc genhtml_function_coverage=1 00:08:52.168 --rc genhtml_legend=1 00:08:52.168 --rc geninfo_all_blocks=1 00:08:52.168 --rc geninfo_unexecuted_blocks=1 00:08:52.168 00:08:52.168 ' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:52.168 
06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75889 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75889 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 75889 ']' 00:08:52.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
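Note: get_first_nvme_bdf above condenses to the pipeline shown in the xtrace: gen_nvme.sh emits a bdev config whose traddr fields are the NVMe PCI addresses. Stripped of tracing, it is roughly:

    # Enumerate NVMe PCI addresses; this run found four QEMU controllers
    # (0000:00:10.0 .. 0000:00:13.0) and the test attaches to the first.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # the real helper's (( 4 == 0 )) guard
    bdf=${bdfs[0]}
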
00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.168 06:01:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.168 [2024-10-01 06:01:17.616700] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:52.168 [2024-10-01 06:01:17.616818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75889 ] 00:08:52.168 [2024-10-01 06:01:17.762160] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.428 [2024-10-01 06:01:17.796607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.429 [2024-10-01 06:01:17.796802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.429 [2024-10-01 06:01:17.797036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.429 [2024-10-01 06:01:17.797454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.994 nvme0n1 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_pOQJD.txt 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.994 true 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727762478 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75913 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:52.994 06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:52.994 
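Note: the two RPCs above set up the stuck-admin-command scenario. bdev_nvme_add_error_injection arms a one-shot failure on admin opcode 10 (0x0a, Get Features) with sct=0/sc=1, and --do_not_submit parks the command for up to 15 s instead of completing it; bdev_nvme_send_cmd then issues the Get Features command that gets stuck (byte 0 of the base64 payload is 0x0a and cdw10 is 0x7, Number of Queues, matching the completion printed below). Reconstructed from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Sent in the background by the test; the controller reset two seconds
    # later must complete it manually (INVALID OPCODE sct:0 sc:1 below).
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
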
06:01:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.577 [2024-10-01 06:01:20.576815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:55.577 [2024-10-01 06:01:20.577192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:55.577 [2024-10-01 06:01:20.577223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:55.577 [2024-10-01 06:01:20.577240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.577 [2024-10-01 06:01:20.579231] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.577 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75913 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75913 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75913 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_pOQJD.txt 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_pOQJD.txt 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75889 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 75889 ']' 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 75889 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75889 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.577 killing process with pid 75889 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75889' 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 75889 00:08:55.577 06:01:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 75889 00:08:55.577 06:01:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:55.577 06:01:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:55.577 00:08:55.577 real 0m3.701s 00:08:55.577 user 0m13.181s 00:08:55.577 sys 0m0.507s 00:08:55.577 06:01:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:55.577 06:01:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.577 ************************************ 00:08:55.577 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:55.577 ************************************ 00:08:55.577 06:01:21 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:55.577 06:01:21 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:55.577 06:01:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.577 06:01:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.577 06:01:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.577 ************************************ 00:08:55.577 START TEST nvme_fio 00:08:55.577 ************************************ 00:08:55.577 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:55.577 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:55.577 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:55.577 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:55.577 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:55.577 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:55.577 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:55.577 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:55.577 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.577 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:55.577 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:55.577 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:55.577 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:55.577 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:55.577 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:55.577 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:55.837 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:55.837 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:56.098 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:56.098 06:01:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:56.098 06:01:21 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:56.098 06:01:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:56.359 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:56.359 fio-3.35 00:08:56.359 Starting 1 thread 00:09:02.940 00:09:02.940 test: (groupid=0, jobs=1): err= 0: pid=76036: Tue Oct 1 06:01:27 2024 00:09:02.940 read: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(168MiB/2001msec) 00:09:02.940 slat (nsec): min=3334, max=88203, avg=4949.36, stdev=1887.20 00:09:02.940 clat (usec): min=988, max=10128, avg=2952.39, stdev=815.00 00:09:02.940 lat (usec): min=991, max=10175, avg=2957.34, stdev=815.94 00:09:02.940 clat percentiles (usec): 00:09:02.940 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:02.940 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2835], 00:09:02.940 | 70.00th=[ 2999], 80.00th=[ 3228], 90.00th=[ 3752], 95.00th=[ 4555], 00:09:02.940 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7242], 99.95th=[ 7832], 00:09:02.940 | 99.99th=[10028] 00:09:02.940 bw ( KiB/s): min=80640, max=86720, per=98.30%, avg=84658.67, stdev=3480.66, samples=3 00:09:02.940 iops : min=20160, max=21680, avg=21164.67, stdev=870.16, samples=3 00:09:02.940 write: IOPS=21.4k, BW=83.5MiB/s (87.5MB/s)(167MiB/2001msec); 0 zone resets 00:09:02.940 slat (usec): min=3, max=125, avg= 5.23, stdev= 2.05 00:09:02.940 clat (usec): min=957, max=10055, avg=2994.96, stdev=823.71 00:09:02.940 lat (usec): min=961, max=10070, avg=3000.19, stdev=824.70 00:09:02.940 clat percentiles (usec): 00:09:02.940 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:02.940 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2868], 00:09:02.940 | 70.00th=[ 3032], 80.00th=[ 3261], 90.00th=[ 3785], 95.00th=[ 4621], 00:09:02.940 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7242], 99.95th=[ 7898], 00:09:02.940 | 99.99th=[ 9634] 00:09:02.940 bw ( KiB/s): min=80528, max=86920, per=99.16%, avg=84757.33, stdev=3663.02, samples=3 00:09:02.940 iops : min=20132, max=21730, avg=21189.33, stdev=915.76, samples=3 00:09:02.940 lat (usec) : 1000=0.01% 00:09:02.940 lat (msec) : 2=1.00%, 4=91.26%, 10=7.74%, 20=0.01% 00:09:02.940 cpu : usr=99.20%, sys=0.15%, ctx=19, majf=0, minf=625 00:09:02.940 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:02.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.940 issued rwts: total=43081,42760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.940 00:09:02.940 Run status group 0 (all jobs): 00:09:02.940 READ: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:02.940 WRITE: bw=83.5MiB/s (87.5MB/s), 83.5MiB/s-83.5MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:02.940 ----------------------------------------------------- 00:09:02.940 Suppressions used: 00:09:02.940 count bytes template 00:09:02.940 1 32 /usr/src/fio/parse.c 00:09:02.940 1 8 libtcmalloc_minimal.so 00:09:02.940 ----------------------------------------------------- 00:09:02.940 00:09:02.940 06:01:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:02.940 06:01:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:02.940 06:01:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:02.940 06:01:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:02.940 06:01:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:02.940 06:01:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:02.940 06:01:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:02.940 06:01:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:02.940 06:01:28 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:03.201 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.201 fio-3.35 00:09:03.201 Starting 1 thread 00:09:09.790 00:09:09.791 test: (groupid=0, jobs=1): err= 0: pid=76091: Tue Oct 1 06:01:34 2024 00:09:09.791 read: IOPS=22.3k, BW=87.2MiB/s (91.5MB/s)(175MiB/2001msec) 00:09:09.791 slat (nsec): min=3275, max=57831, avg=4891.10, stdev=1895.48 00:09:09.791 clat (usec): min=212, max=12425, avg=2857.16, stdev=802.13 00:09:09.791 lat (usec): min=217, max=12482, avg=2862.05, stdev=803.19 00:09:09.791 clat percentiles (usec): 00:09:09.791 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:09.791 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:09.791 | 70.00th=[ 2900], 80.00th=[ 3064], 90.00th=[ 3458], 95.00th=[ 4293], 00:09:09.791 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 8455], 99.95th=[ 9503], 00:09:09.791 | 99.99th=[11994] 00:09:09.791 bw ( KiB/s): min=88424, max=91552, per=100.00%, avg=89853.33, stdev=1581.30, samples=3 00:09:09.791 iops : min=22106, max=22888, avg=22463.33, stdev=395.32, samples=3 00:09:09.791 write: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(173MiB/2001msec); 0 zone resets 00:09:09.791 slat (nsec): min=3419, max=98664, avg=5161.07, stdev=1980.65 00:09:09.791 clat (usec): min=221, max=12187, avg=2873.60, stdev=802.78 00:09:09.791 lat (usec): min=225, max=12202, avg=2878.76, stdev=803.87 00:09:09.791 clat percentiles (usec): 00:09:09.791 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:09:09.791 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:09.791 | 70.00th=[ 2900], 80.00th=[ 3064], 90.00th=[ 3490], 95.00th=[ 4293], 00:09:09.791 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 8979], 99.95th=[ 9634], 00:09:09.791 | 99.99th=[11600] 00:09:09.791 bw ( KiB/s): min=89056, max=91056, per=100.00%, avg=90029.33, stdev=1001.07, samples=3 00:09:09.791 iops : min=22264, max=22764, avg=22507.33, stdev=250.27, samples=3 00:09:09.791 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:09.791 lat (msec) : 2=1.23%, 4=92.72%, 10=5.98%, 20=0.04% 00:09:09.791 cpu : usr=99.20%, sys=0.20%, ctx=20, majf=0, minf=626 00:09:09.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:09.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.791 issued rwts: total=44685,44388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.791 00:09:09.791 Run status group 0 (all jobs): 00:09:09.791 READ: bw=87.2MiB/s (91.5MB/s), 87.2MiB/s-87.2MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:09:09.791 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=173MiB (182MB), run=2001-2001msec 00:09:09.791 ----------------------------------------------------- 00:09:09.791 Suppressions used: 00:09:09.791 count bytes template 00:09:09.791 1 32 /usr/src/fio/parse.c 00:09:09.791 1 8 libtcmalloc_minimal.so 00:09:09.791 ----------------------------------------------------- 00:09:09.791 00:09:09.791 06:01:35 nvme.nvme_fio 
-- nvme/nvme.sh@44 -- # ran_fio=true 00:09:09.791 06:01:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:09.791 06:01:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:09.791 06:01:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:09.791 06:01:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:09.791 06:01:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:10.052 06:01:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:10.052 06:01:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:10.052 06:01:35 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:10.313 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:10.313 fio-3.35 00:09:10.313 Starting 1 thread 00:09:18.492 00:09:18.492 test: (groupid=0, jobs=1): err= 0: pid=76146: Tue Oct 1 06:01:42 2024 00:09:18.492 read: IOPS=23.0k, BW=89.7MiB/s (94.0MB/s)(179MiB/2001msec) 00:09:18.492 slat (usec): min=4, max=170, avg= 4.76, stdev= 1.64 00:09:18.492 clat (usec): min=564, max=10790, avg=2778.53, stdev=619.16 00:09:18.492 lat (usec): min=569, max=10832, avg=2783.28, stdev=619.88 00:09:18.492 clat percentiles (usec): 00:09:18.492 | 1.00th=[ 2008], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:18.492 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2671], 
60.00th=[ 2737], 00:09:18.492 | 70.00th=[ 2835], 80.00th=[ 2999], 90.00th=[ 3326], 95.00th=[ 3720], 00:09:18.492 | 99.00th=[ 5604], 99.50th=[ 6652], 99.90th=[ 7111], 99.95th=[ 8848], 00:09:18.492 | 99.99th=[10683] 00:09:18.492 bw ( KiB/s): min=89692, max=93568, per=99.28%, avg=91161.33, stdev=2101.14, samples=3 00:09:18.492 iops : min=22423, max=23392, avg=22790.33, stdev=525.29, samples=3 00:09:18.492 write: IOPS=22.8k, BW=89.1MiB/s (93.4MB/s)(178MiB/2001msec); 0 zone resets 00:09:18.492 slat (nsec): min=4337, max=68698, avg=5000.45, stdev=1449.72 00:09:18.492 clat (usec): min=292, max=10722, avg=2796.31, stdev=621.98 00:09:18.492 lat (usec): min=297, max=10736, avg=2801.31, stdev=622.70 00:09:18.492 clat percentiles (usec): 00:09:18.492 | 1.00th=[ 2040], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:18.492 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:18.492 | 70.00th=[ 2868], 80.00th=[ 3032], 90.00th=[ 3326], 95.00th=[ 3752], 00:09:18.492 | 99.00th=[ 5735], 99.50th=[ 6652], 99.90th=[ 7242], 99.95th=[ 8979], 00:09:18.492 | 99.99th=[10552] 00:09:18.492 bw ( KiB/s): min=89109, max=92776, per=100.00%, avg=91321.67, stdev=1947.57, samples=3 00:09:18.492 iops : min=22277, max=23194, avg=22830.33, stdev=487.03, samples=3 00:09:18.492 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:09:18.492 lat (msec) : 2=0.79%, 4=95.56%, 10=3.59%, 20=0.03% 00:09:18.492 cpu : usr=99.30%, sys=0.10%, ctx=4, majf=0, minf=625 00:09:18.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:18.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.492 issued rwts: total=45933,45651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.492 00:09:18.492 Run status group 0 (all jobs): 00:09:18.492 READ: bw=89.7MiB/s (94.0MB/s), 89.7MiB/s-89.7MiB/s (94.0MB/s-94.0MB/s), io=179MiB (188MB), run=2001-2001msec 00:09:18.492 WRITE: bw=89.1MiB/s (93.4MB/s), 89.1MiB/s-89.1MiB/s (93.4MB/s-93.4MB/s), io=178MiB (187MB), run=2001-2001msec 00:09:18.492 ----------------------------------------------------- 00:09:18.492 Suppressions used: 00:09:18.492 count bytes template 00:09:18.492 1 32 /usr/src/fio/parse.c 00:09:18.492 1 8 libtcmalloc_minimal.so 00:09:18.492 ----------------------------------------------------- 00:09:18.492 00:09:18.492 06:01:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:18.492 06:01:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:18.492 06:01:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:18.493 06:01:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:18.493 06:01:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:18.493 06:01:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:18.493 06:01:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:18.493 06:01:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:18.493 06:01:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:18.493 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:18.493 fio-3.35 00:09:18.493 Starting 1 thread 00:09:23.779 00:09:23.779 test: (groupid=0, jobs=1): err= 0: pid=76201: Tue Oct 1 06:01:49 2024 00:09:23.779 read: IOPS=21.6k, BW=84.3MiB/s (88.4MB/s)(169MiB/2001msec) 00:09:23.779 slat (nsec): min=4218, max=79036, avg=5059.04, stdev=1740.68 00:09:23.779 clat (usec): min=339, max=10483, avg=2952.69, stdev=789.80 00:09:23.779 lat (usec): min=344, max=10527, avg=2957.75, stdev=790.65 00:09:23.779 clat percentiles (usec): 00:09:23.779 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:23.779 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2900], 00:09:23.779 | 70.00th=[ 3097], 80.00th=[ 3359], 90.00th=[ 3785], 95.00th=[ 4359], 00:09:23.779 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 8160], 00:09:23.779 | 99.99th=[10290] 00:09:23.779 bw ( KiB/s): min=84488, max=90016, per=100.00%, avg=87848.00, stdev=2950.48, samples=3 00:09:23.779 iops : min=21122, max=22504, avg=21962.00, stdev=737.62, samples=3 00:09:23.779 write: IOPS=21.4k, BW=83.7MiB/s (87.7MB/s)(167MiB/2001msec); 0 zone resets 00:09:23.779 slat (nsec): min=4363, max=87903, avg=5332.17, stdev=1809.86 00:09:23.779 clat (usec): min=482, max=10398, avg=2981.20, stdev=806.96 00:09:23.779 lat (usec): min=487, max=10408, avg=2986.53, stdev=807.84 00:09:23.779 clat percentiles (usec): 00:09:23.779 | 1.00th=[ 2008], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2442], 00:09:23.779 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2900], 00:09:23.779 | 70.00th=[ 3097], 80.00th=[ 3359], 90.00th=[ 3851], 95.00th=[ 4490], 
00:09:23.779 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 8717], 00:09:23.779 | 99.99th=[10028] 00:09:23.779 bw ( KiB/s): min=84400, max=91024, per=100.00%, avg=88037.33, stdev=3359.59, samples=3 00:09:23.779 iops : min=21100, max=22756, avg=22009.33, stdev=839.90, samples=3 00:09:23.779 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:23.779 lat (msec) : 2=0.99%, 4=91.27%, 10=7.70%, 20=0.01% 00:09:23.779 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=624 00:09:23.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:23.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:23.779 issued rwts: total=43190,42868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:23.779 00:09:23.779 Run status group 0 (all jobs): 00:09:23.779 READ: bw=84.3MiB/s (88.4MB/s), 84.3MiB/s-84.3MiB/s (88.4MB/s-88.4MB/s), io=169MiB (177MB), run=2001-2001msec 00:09:23.779 WRITE: bw=83.7MiB/s (87.7MB/s), 83.7MiB/s-83.7MiB/s (87.7MB/s-87.7MB/s), io=167MiB (176MB), run=2001-2001msec 00:09:23.779 ----------------------------------------------------- 00:09:23.779 Suppressions used: 00:09:23.779 count bytes template 00:09:23.779 1 32 /usr/src/fio/parse.c 00:09:23.779 1 8 libtcmalloc_minimal.so 00:09:23.779 ----------------------------------------------------- 00:09:23.779 00:09:23.779 ************************************ 00:09:23.779 END TEST nvme_fio 00:09:23.779 ************************************ 00:09:23.779 06:01:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:23.779 06:01:49 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:23.779 00:09:23.779 real 0m28.208s 00:09:23.779 user 0m18.399s 00:09:23.779 sys 0m17.388s 00:09:23.779 06:01:49 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.779 06:01:49 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:23.779 ************************************ 00:09:23.779 END TEST nvme 00:09:23.779 ************************************ 00:09:23.779 00:09:23.779 real 1m35.723s 00:09:23.779 user 3m33.707s 00:09:23.779 sys 0m27.348s 00:09:23.779 06:01:49 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.779 06:01:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.779 06:01:49 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:23.779 06:01:49 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:23.779 06:01:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.779 06:01:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.779 06:01:49 -- common/autotest_common.sh@10 -- # set +x 00:09:23.779 ************************************ 00:09:23.779 START TEST nvme_scc 00:09:23.779 ************************************ 00:09:23.779 06:01:49 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.040 * Looking for test storage... 
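Note: the nvme_fio test that just finished (real 0m28s of the suite's 1m35s) ran the same randrw fio job against each of the four controllers through the SPDK fio plugin. The invocation pattern, taken verbatim from the xtrace, is below; the colons in the PCI address are written as dots because fio reserves ':' as a filename separator:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
            '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
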
00:09:24.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.040 --rc genhtml_branch_coverage=1 00:09:24.040 --rc genhtml_function_coverage=1 00:09:24.040 --rc genhtml_legend=1 00:09:24.040 --rc geninfo_all_blocks=1 00:09:24.040 --rc geninfo_unexecuted_blocks=1 00:09:24.040 00:09:24.040 ' 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.040 --rc genhtml_branch_coverage=1 00:09:24.040 --rc genhtml_function_coverage=1 00:09:24.040 --rc genhtml_legend=1 00:09:24.040 --rc geninfo_all_blocks=1 00:09:24.040 --rc geninfo_unexecuted_blocks=1 00:09:24.040 00:09:24.040 ' 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:24.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.040 --rc genhtml_branch_coverage=1 00:09:24.040 --rc genhtml_function_coverage=1 00:09:24.040 --rc genhtml_legend=1 00:09:24.040 --rc geninfo_all_blocks=1 00:09:24.040 --rc geninfo_unexecuted_blocks=1 00:09:24.040 00:09:24.040 ' 00:09:24.040 06:01:49 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.040 --rc genhtml_branch_coverage=1 00:09:24.040 --rc genhtml_function_coverage=1 00:09:24.040 --rc genhtml_legend=1 00:09:24.040 --rc geninfo_all_blocks=1 00:09:24.040 --rc geninfo_unexecuted_blocks=1 00:09:24.040 00:09:24.040 ' 00:09:24.040 06:01:49 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.040 06:01:49 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.040 06:01:49 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.040 06:01:49 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.040 06:01:49 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.040 06:01:49 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:24.040 06:01:49 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
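Note: nvme_scc.sh sources test/common/nvme/functions.sh (via cuse/common.sh), whose state lives in the arrays declared just below: ctrls/nvmes hold parsed id-ctrl and id-ns fields per controller, bdfs maps controllers to PCI addresses, and ordered_ctrls fixes iteration order. Condensed from the declarations that follow:

    declare -A ctrls nvmes bdfs   # keyed by controller name, e.g. nvme0
    declare -a ordered_ctrls
    nvme_name=
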
00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:24.040 06:01:49 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:24.040 06:01:49 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.040 06:01:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:24.041 06:01:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:24.041 06:01:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:24.041 06:01:49 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.561 Waiting for block devices as requested 00:09:24.561 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.561 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.561 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.561 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.888 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:29.888 06:01:55 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:29.888 06:01:55 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:29.889 06:01:55 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.889 06:01:55 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:29.889 06:01:55 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.889 06:01:55 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
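Note: scan_nvme_ctrls above walks /sys/class/nvme/nvme*, resolves each controller's PCI address, and nvme_get then parses `nvme id-ctrl` output into a bash associative array — that is what the long run of eval lines following this point is doing. Stripped of xtrace noise, the parsing loop is essentially the sketch below; the exact trimming rules are assumed (the real helper keeps trailing padding in values such as 'QEMU NVMe Ctrl '):

    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip lines without a 'reg : val' pair
        reg=${reg//[[:space:]]/}           # e.g. 'vid'
        val=${val#"${val%%[![:space:]]*}"} # left-trim, e.g. '0x1b36'
        eval "nvme0[$reg]=\"$val\""        # -> nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
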
00:09:29.889 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # per-field cycle (@21-23): IFS=:; read -r reg val; [[ -n $val ]] && eval 'nvme0[$reg]="$val"'; fields captured into nvme0[]:
00:09:29.889 06:01:55 nvme_scc --     vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400
00:09:29.889 06:01:55 nvme_scc --     cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0
00:09:29.889 06:01:55 nvme_scc --     cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0
00:09:29.890 06:01:55 nvme_scc --     vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:09:29.890 06:01:55 nvme_scc --     wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0
00:09:29.890 06:01:55 nvme_scc --     fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0
00:09:29.891 06:01:55 nvme_scc --     anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44
00:09:29.891 06:01:55 nvme_scc --     maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0
00:09:29.892 06:01:55 nvme_scc --     ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0
00:09:29.892 06:01:55 nvme_scc --     iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:29.892 06:01:55 nvme_scc --     ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:29.892 06:01:55 nvme_scc --     rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
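Those scalars are what the test gates on. The nvme_scc suite is named for the NVMe Simple Copy command, which a controller advertises in bit 8 of the ONCS field just parsed; a hypothetical gate on that value (the actual helper in functions.sh may be shaped differently):

    supports_scc() {
        local oncs=${nvme0[oncs]:-0}     # from the listing above: 0x15d
        (( oncs & 0x100 ))               # ONCS bit 8: Copy command supported
    }
    supports_scc && echo "simple copy supported on ${nvme0[subnqn]}"

With oncs=0x15d, 0x15d & 0x100 is 0x100, so the check passes for this controller.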
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:29.892 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # same per-field cycle; fields captured into nvme0n1[]:
00:09:29.892 06:01:55 nvme_scc --     nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f
00:09:29.893 06:01:55 nvme_scc --     dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:09:29.893 06:01:55 nvme_scc --     noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:29.894 06:01:55 nvme_scc --     nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:29.894 06:01:55 nvme_scc --     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:29.894 06:01:55 nvme_scc --     lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 '
00:09:29.894 06:01:55 nvme_scc --     lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 '
00:09:29.894 06:01:55 nvme_scc --     lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
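flbas=0x4 says the namespace is running LBA format 4, and lbaf4's lbads:12 is a power-of-two data size. A quick sketch of decoding that (field names taken from the listing above; the string parsing is illustrative, not the test's own code):

    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    fmt=$(( flbas & 0xf ))                       # low nibble selects the format: 4
    lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}   # extract "12"
    echo "lbaf$fmt: $((1 << lbads))-byte blocks" # 4096
    # nsze=0x140000 such blocks, i.e. 1310720 * 4096 bytes = 5 GiB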
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
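With nvme0 fully scanned, scan_nvme_ctrls records it in the four maps declared at the top of the trace before moving to the next controller. A sketch of how a later step might walk them (array names are from the trace; the loop body is hypothetical):

    for ctrl in "${ordered_ctrls[@]}"; do
        [[ -n $ctrl ]] || continue     # indices mirror device numbers, so gaps are possible
        echo "$ctrl @ ${bdfs[$ctrl]}: namespaces tracked in ${nvmes[$ctrl]}[]"
    done
    # for the controller above: nvme0 @ 0000:00:11.0: namespaces tracked in nvme0_ns[]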
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:29.894 06:01:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:29.894 06:01:55 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:29.894 06:01:55 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:09:29.895 06:01:55 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:29.895 06:01:55 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:29.895 06:01:55 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:29.895 06:01:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:29.895 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:09:29.895 06:01:55 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:29.895 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:09:29.895 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:29.895 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # same per-field cycle; fields captured into nvme1[]:
00:09:29.895 06:01:55 nvme_scc --     vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400
00:09:29.895 06:01:55 nvme_scc --     cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0
00:09:29.895 06:01:55 nvme_scc --     cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:09:29.896 06:01:55 nvme_scc --     nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0
00:09:29.896 06:01:55 nvme_scc --     apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0
00:09:29.896 06:01:55 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.896 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:29.897 06:01:55 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.897 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:29.898 06:01:55 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
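Several of the values captured so far are log2-encoded per the NVMe spec: mdts=7 means the maximum data transfer is 2^7 units of the controller's minimum memory page size, and sqes=0x66 / cqes=0x44 pack the maximum (high nibble) and required (low nibble) log2 queue-entry sizes. A quick decode, assuming the usual 4 KiB minimum page size for this QEMU controller:

    # Decoding a few id-ctrl fields from the trace (4 KiB MPSMIN is an assumption).
    declare -A nvme1=([mdts]=7 [sqes]=0x66 [cqes]=0x44)
    mps=4096
    echo "max transfer: $(( (1 << nvme1[mdts]) * mps )) bytes"   # 524288 = 512 KiB
    echo "SQ entry:     $(( 1 << (nvme1[sqes] & 0xf) )) bytes"   # 64
    echo "CQ entry:     $(( 1 << (nvme1[cqes] & 0xf) )) bytes"   # 16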
00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:29.898 06:01:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.899 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.900 
06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:29.900 06:01:55 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.900 06:01:55 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:29.900 06:01:55 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.900 06:01:55 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.900 06:01:55 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:29.900 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:29.901 06:01:55 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.901 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:29.902 06:01:55 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.902 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:29.903 06:01:55 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
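The block above is the inner loop of nvme_get in nvme/functions.sh populating the controller array: each line of nvme id-ctrl output is split on ':' (functions.sh@21), empty values are skipped by the [[ -n ... ]] guard (functions.sh@22), and every surviving register/value pair is eval'd into a global associative array, here nvme2 (functions.sh@23). A minimal sketch of that loop, reconstructed from the trace alone (the whitespace trimming is my assumption; the real helper may differ in detail):

    nvme_get_sketch() {                          # simplified stand-in for nvme_get
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                      # e.g. declares the global array nvme2
        while IFS=: read -r reg val; do          # functions.sh@21
            reg=${reg%% *}                       # 'aerl      ' -> 'aerl'
            val=${val#"${val%%[![:space:]]*}"}   # drop leading spaces
            [[ -n $val ]] || continue            # functions.sh@22 guard
            eval "${ref}[$reg]=\"$val\""         # functions.sh@23, e.g. nvme2[aerl]="3"
        done < <(nvme "$cmd" "$dev")             # this build runs /usr/local/src/nvme-cli/nvme
    }

Called as nvme_get_sketch nvme2 id-ctrl /dev/nvme2, this reproduces the nvme2[...]=... assignments echoed throughout this section. Because read -r keeps everything after the first ':' in val, values that themselves contain colons, such as the subnqn recorded later in this section, survive the IFS=: split intact.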
00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:29.903 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:29.904 
06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.904 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.905 06:01:55 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:29.905 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.906 06:01:55 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.906 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:29.907 06:01:55 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:29.907 06:01:55 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:29.907 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
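The handoff visible above, where the nvme2n1 parse closes at functions.sh@58 and nvme2n2 begins at functions.sh@54-57, is the namespace walk wrapped around the same helper: every nvme2n* node under the controller's sysfs directory gets its own id-ns pass and its own global array. Roughly, under the same assumptions as the sketch earlier in this section (nvme2_ns is presumably declared by the caller in the real script):

    walk_namespaces() {
        local ctrl=$1 ns ns_dev                          # e.g. ctrl=/sys/class/nvme/nvme2
        local -n _ctrl_ns=${ctrl##*/}_ns                 # functions.sh@53 nameref to nvme2_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do              # functions.sh@54: nvme2n1, nvme2n2, ...
            [[ -e $ns ]] || continue                     # functions.sh@55
            ns_dev=${ns##*/}                             # functions.sh@56
            nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
            _ctrl_ns[${ns_dev##*n}]=$ns_dev              # functions.sh@58: index by namespace number
        done
    }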
00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.908 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 
06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 
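Each namespace records eight LBA formats, lbaf0 through lbaf7. In those entries, ms is the metadata bytes carried per block, lbads is the base-2 log of the data block size (lbads:9 is 512 B, lbads:12 is 4096 B), and flbas=0x4 means format 4, ms:0 lbads:12, is the one marked (in use). With nsze=0x100000 blocks, the capacity of each of these namespaces follows directly:

    echo $(( 0x4 & 0xf ))                # 4: low nibble of flbas selects the active lbaf
    echo $(( 0x100000 * (1 << 12) ))     # 4294967296 bytes, i.e. 4 GiB per namespace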
06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.909 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:29.910 06:01:55 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.910 
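Every reg/val pair above falls out of the same loop: nvme_get runs `nvme id-ns`, splits each output line on ':' with `IFS=: read -r reg val`, and evals the pair into a global associative array named after the device. Note that the mssrl/mcl/msrc values it just stored (128/128/127) are the Copy command limits the simple-copy test later in this log has to respect. A minimal standalone sketch of the parse pattern (hypothetical array name; the real helper is nvme_get in test/common/nvme/functions.sh):

  declare -A ns                                 # stands in for the generated nvme2n3 array
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/} val=${val# }     # trim the column padding
      [[ -n $reg && -n $val ]] || continue
      ns[$reg]=$val                             # e.g. ns[nsze]=0x100000
  done < <(nvme id-ns /dev/nvme2n3)
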
06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.910 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:29.911 06:01:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:30.171 06:01:55 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.171 06:01:55 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:30.171 06:01:55 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.171 06:01:55 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:30.171 06:01:55 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
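The mdts=7 recorded above is the Maximum Data Transfer Size exponent: the limit is 2^MDTS units of the controller's minimum memory page size. Assuming the usual 4 KiB MPSMIN (CAP.MPSMIN is not shown in this trace), that caps any single transfer at 512 KiB:

  mdts=7
  echo $(( (1 << mdts) * 4096 ))   # => 524288 bytes (512 KiB), assuming 4 KiB MPSMIN
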
00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.171 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
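The ctratt=0x88010 stored above is worth flagging because the nvme_fdp suite that starts at the end of this log keys on it: in recent NVMe revisions, bit 19 of CTRATT advertises Flexible Data Placement, and this QEMU controller (its subnqn below ends in fdp-subsys3) has it set. Treat the bit position as a spec-level assumption rather than something this trace proves:

  ctratt=0x88010
  (( ctratt & 1 << 19 )) && echo "controller advertises FDP"   # bit 19 = 0x80000
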
00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 
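wctemp=343 and cctemp=373 above are the warning and critical composite temperature thresholds, which the NVMe spec reports in kelvin, so this controller warns at 70 °C and goes critical at 100 °C:

  for k in 343 373; do echo "$k K = $(( k - 273 )) C"; done   # => 70 C, 100 C
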
06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.172 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
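The sqes=0x66 and cqes=0x44 captured just above each pack two log2 entry sizes into one byte, the required size in the low nibble and the maximum in the high nibble, so this controller supports only the standard 64-byte submission and 16-byte completion queue entries. Decoded in shell arithmetic:

  sqes=0x66 cqes=0x44
  echo "SQE: $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) bytes"   # 64..64
  echo "CQE: $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) bytes"   # 16..16
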
00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.173 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.174 06:01:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:30.174 06:01:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:30.174 
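The scc scan running above is plain bit arithmetic: get_nvme_ctrl_feature reads oncs through a bash nameref into each controller's generated array, and ctrl_has_scc tests ONCS bit 8, the Copy command bit, which is why oncs=0x15d passes for every controller checked here. A condensed sketch of those two steps (array contents hard-coded from the trace; function name is hypothetical):

  declare -A nvme1=([oncs]=0x15d)
  get_feature() { local -n _ctrl=$1; echo "${_ctrl[$2]}"; }    # nameref indexes the array by name
  oncs=$(get_feature nvme1 oncs)
  (( oncs & 1 << 8 )) && echo "nvme1 supports Simple Copy"     # 0x15d has bit 8 set
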
06:01:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:30.174 06:01:55 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:30.174 06:01:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:30.174 06:01:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:30.174 06:01:55 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:30.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.998 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.998 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.998 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.998 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:30.999 06:01:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:30.999 06:01:56 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:30.999 06:01:56 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.999 06:01:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:30.999 ************************************ 00:09:30.999 START TEST nvme_simple_copy 00:09:30.999 ************************************ 00:09:30.999 06:01:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:31.257 Initializing NVMe Controllers 00:09:31.257 Attaching to 0000:00:10.0 00:09:31.257 Controller supports SCC. Attached to 0000:00:10.0 00:09:31.257 Namespace ID: 1 size: 6GB 00:09:31.257 Initialization complete. 00:09:31.257 00:09:31.257 Controller QEMU NVMe Ctrl (12340 ) 00:09:31.257 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:31.257 Namespace Block Size:4096 00:09:31.257 Writing LBAs 0 to 63 with Random Data 00:09:31.257 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:31.257 LBAs matching Written Data: 64 00:09:31.257 00:09:31.257 real 0m0.212s 00:09:31.257 user 0m0.071s 00:09:31.257 sys 0m0.041s 00:09:31.257 06:01:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.257 ************************************ 00:09:31.257 END TEST nvme_simple_copy 00:09:31.257 ************************************ 00:09:31.257 06:01:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:31.257 ************************************ 00:09:31.257 END TEST nvme_scc 00:09:31.257 ************************************ 00:09:31.257 00:09:31.257 real 0m7.430s 00:09:31.257 user 0m0.984s 00:09:31.257 sys 0m1.337s 00:09:31.257 06:01:56 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.257 06:01:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:31.257 06:01:56 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:31.257 06:01:56 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:31.257 06:01:56 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:31.257 06:01:56 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:31.257 06:01:56 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:31.257 06:01:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.257 06:01:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.257 06:01:56 -- common/autotest_common.sh@10 -- # set +x 00:09:31.257 ************************************ 00:09:31.257 START TEST nvme_fdp 00:09:31.257 ************************************ 00:09:31.257 06:01:56 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:31.515 * Looking for test storage... 
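The START/END banners and the real/user/sys triple above come from the run_test wrapper, which checks its argument count (the `'[' 4 -le 1 ']'` guard in the trace), prints the banners, and runs the payload under `time`. Roughly its shape, simplified; the real wrapper in common/autotest_common.sh also handles the xtrace_disable toggling visible in the trace:

  run_test() {
      (( $# > 1 )) || return 1     # the "[ 4 -le 1 ]" guard seen above
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      echo "************ END TEST $name ************"
  }
  # invoked above as: run_test nvme_simple_copy .../simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
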
00:09:31.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.515 06:01:56 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:31.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.515 --rc genhtml_branch_coverage=1 00:09:31.515 --rc genhtml_function_coverage=1 00:09:31.515 --rc genhtml_legend=1 00:09:31.515 --rc geninfo_all_blocks=1 00:09:31.515 --rc geninfo_unexecuted_blocks=1 00:09:31.515 00:09:31.515 ' 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:31.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.515 --rc genhtml_branch_coverage=1 00:09:31.515 --rc genhtml_function_coverage=1 00:09:31.515 --rc genhtml_legend=1 00:09:31.515 --rc geninfo_all_blocks=1 00:09:31.515 --rc geninfo_unexecuted_blocks=1 00:09:31.515 00:09:31.515 ' 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:31.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.515 --rc genhtml_branch_coverage=1 00:09:31.515 --rc genhtml_function_coverage=1 00:09:31.515 --rc genhtml_legend=1 00:09:31.515 --rc geninfo_all_blocks=1 00:09:31.515 --rc geninfo_unexecuted_blocks=1 00:09:31.515 00:09:31.515 ' 00:09:31.515 06:01:56 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:31.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.515 --rc genhtml_branch_coverage=1 00:09:31.515 --rc genhtml_function_coverage=1 00:09:31.515 --rc genhtml_legend=1 00:09:31.515 --rc geninfo_all_blocks=1 00:09:31.515 --rc geninfo_unexecuted_blocks=1 00:09:31.515 00:09:31.515 ' 00:09:31.515 06:01:56 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:31.515 06:01:56 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:31.515 06:01:56 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:31.515 06:01:56 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.516 06:01:56 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.516 06:01:56 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.516 06:01:56 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.516 06:01:56 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.516 06:01:56 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.516 06:01:56 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.516 06:01:56 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.516 06:01:56 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:31.516 06:01:56 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
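The lt 1.15 2 / cmp_versions trace a few records back is how the harness compares the detected lcov version against a threshold before choosing the --rc option spellings exported above: both version strings are split on the separators .-: into arrays and compared field by field. Condensed to its core idea, with missing fields treated as 0 (a sketch of the mechanism, not the scripts/common.sh source verbatim):

  lt() {  # usage: lt 1.15 2  ->  returns 0 (true) when $1 < $2
      local -a ver1 ver2
      local IFS=.-:              # same separators the trace shows
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                   # equal is not less-than
  }
  lt 1.15 2 && echo 'version 1.15 sorts before 2'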
00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:31.516 06:01:56 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:31.516 06:01:56 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.516 06:01:56 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.037 Waiting for block devices as requested 00:09:32.037 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.037 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.037 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.037 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.321 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:37.321 06:02:02 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:37.321 06:02:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.321 06:02:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:37.321 06:02:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.321 06:02:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
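The long run of IFS=: / read -r reg val records that follows is nvme_get populating a global bash associative array, one entry per field of nvme id-ctrl output. The shape of that loop, condensed (illustrative, not the functions.sh source verbatim; /dev/nvme0 is the controller the scan just matched at 0000:00:11.0):

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}     # keys arrive padded, e.g. 'vid      '
      [[ -n $val ]] && nvme0[$reg]=${val# }
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"

Each eval record in the trace below is one iteration of this loop landing a field (vid, ssvid, sn, mn, ...) in nvme0.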
00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:37.321 06:02:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:37.321 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:37.322 06:02:02 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
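One detail worth noting in these records: each assignment is built as a single-quoted string and passed through eval (eval 'nvme0[sn]="12341 "') rather than assigned directly, because several id-ctrl values carry significant trailing padding (sn "12341 ", mn "QEMU NVMe Ctrl ") that unquoted word splitting would drop. A minimal reproduction of the pattern (a sketch; it would break if a value itself contained double quotes):

  declare -A nvme0
  reg=sn val='12341 '
  eval "nvme0[$reg]=\"$val\""      # preserves the trailing space
  declare -p nvme0                 # declare -A nvme0=([sn]="12341 ")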
00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:37.322 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.322 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:37.323 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.323 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:37.324 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.324 
06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:37.324 06:02:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.324 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:37.324 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:37.325 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:37.325 06:02:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:37.326 06:02:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.326 06:02:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:37.326 06:02:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.326 06:02:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- 
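A few of the values just captured are worth decoding. `vid=0x1b36`/`ssvid=0x1af4` are Red Hat/QEMU vendor IDs, consistent with the `QEMU NVMe Ctrl` model string, and `mdts=7` is log2-encoded: the Maximum Data Transfer Size is 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN). A one-liner to turn it into bytes; the 4 KiB page size is an assumption about this emulated device, not something read from the log:

```bash
#!/usr/bin/env bash
# MDTS is a power-of-two multiplier of the controller's minimum page size
# (assumed 4096 bytes here; the real value comes from the CAP register).
mdts=7
page_size=4096                        # assumption for this QEMU controller
max_xfer=$(( page_size * (1 << mdts) ))
echo "max transfer: $max_xfer bytes ($(( max_xfer / 1024 )) KiB)"
# -> max transfer: 524288 bytes (512 KiB)
```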
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 
06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.326 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- 
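The temperature thresholds logged here (`wctemp=343`, `cctemp=373`) are reported in Kelvin per the NVMe spec, so the warning and critical composite temperatures work out to roughly 70 °C and 100 °C. A quick sanity-check sketch:

```bash
#!/usr/bin/env bash
# WCTEMP/CCTEMP are in Kelvin; convert to Celsius for a rough check
# (273 is the integer truncation of 273.15).
for t in 343 373; do
  echo "${t} K ~= $(( t - 273 )) C"
done
# -> 343 K ~= 70 C
# -> 373 K ~= 100 C
```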
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 
06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.327 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:37.328 06:02:02 nvme_fdp -- 
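Several of these fields pack two log2 values into one byte: `sqes=0x66` means the required and maximum submission queue entry sizes are both 2^6 = 64 bytes, and `cqes=0x44` gives 16-byte completion entries, while `oncs=0x15d` is a bitmask of optional NVM commands. A sketch that unpacks them; the ONCS bit names follow my reading of the NVMe base spec and should be treated as an assumption:

```bash
#!/usr/bin/env bash
# SQES/CQES pack log2 sizes: bits 3:0 = required size, bits 7:4 = maximum.
sqes=0x66 cqes=0x44 oncs=0x15d
echo "SQE: min $(( 1 << (sqes & 0xf) )) max $(( 1 << (sqes >> 4) )) bytes"
echo "CQE: min $(( 1 << (cqes & 0xf) )) max $(( 1 << (cqes >> 4) )) bytes"

# ONCS bit names as I read the NVMe base spec (treat as an assumption):
names=(Compare WriteUncorrectable DatasetMgmt WriteZeroes SaveField
       Reservations Timestamp Verify Copy)
for i in "${!names[@]}"; do
  (( oncs & (1 << i) )) && echo "ONCS bit $i: ${names[$i]}"
done
```

For `oncs=0x15d` that reports Compare, Dataset Management, Write Zeroes, the Save field, Timestamp, and Copy, which is plausible for a QEMU controller in an FDP test run.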
nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:37.328 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
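After the controller table is filled, the trace switches to namespaces: a bash nameref (`local -n _ctrl_ns=nvme1_ns`) aliases a per-controller array, and a sysfs glob (`"$ctrl/${ctrl##*/}n"*`) enumerates `nvme1n1`, `nvme1n2`, and so on before each one is fed back through `nvme_get ... id-ns`. A reduced sketch of that walk; the variable names mirror the trace but the surrounding loop is simplified:

```bash
#!/usr/bin/env bash
# Reduced sketch of the namespace walk in the trace: glob each controller's
# sysfs dir for nvmeXnY entries and record them in a per-controller array.
shopt -s nullglob                          # no devices -> zero iterations
for ctrl in /sys/class/nvme/nvme*; do
  ctrl_dev=${ctrl##*/}                     # e.g. nvme1
  declare -gA "${ctrl_dev}_ns=()"          # e.g. nvme1_ns
  declare -n _ctrl_ns="${ctrl_dev}_ns"     # nameref, as in the trace
  for ns in "$ctrl/${ctrl_dev}n"*; do      # /sys/class/nvme/nvme1/nvme1n1 ...
    ns_dev=${ns##*/}
    _ctrl_ns[${ns_dev##*n}]=$ns_dev        # key = namespace id: 1 -> nvme1n1
  done
  unset -n _ctrl_ns
done
declare -p nvme1_ns 2>/dev/null || echo "no nvme1 present on this host"
```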
0x17a17a ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:37.329 06:02:02 nvme_fdp -- 
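`flbas=0x7` selects LBA format 7 (the low nibble of FLBAS indexes the lbaf table), which is why `lbaf7` is the entry tagged `(in use)` a little further down, and each format's `lbads` is a log2 block size (`lbads:12` → 4096-byte blocks). Combined with `nsze=0x17a17a` that fixes the namespace size; a small decode sketch using the values from this trace:

```bash
#!/usr/bin/env bash
# Decode the in-use LBA format and the namespace size from the logged fields.
flbas=0x7 nsze=0x17a17a lbads=12      # lbads:12 comes from the lbaf7 entry
fmt=$(( flbas & 0xf ))                # low nibble of FLBAS indexes the table
block=$(( 1 << lbads ))               # lbads is log2(block size) -> 4096
bytes=$(( nsze * block ))
printf 'in-use format: lbaf%d, %d-byte blocks\n' "$fmt" "$block"
printf 'size: %d blocks = %d bytes (~%d MiB)\n' "$nsze" "$bytes" \
       "$(( bytes / 1024 / 1024 ))"
# -> in-use format: lbaf7, 4096-byte blocks
# -> size: 1548666 blocks = 6343335936 bytes (~6049 MiB)
```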
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:37.329 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:37.330 06:02:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.330 06:02:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:37.330 06:02:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.330 06:02:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:37.330 
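Each fully parsed controller is then filed into four global maps, as the `functions.sh@60`-`@63` lines show: `ctrls` (device name), `nvmes` (name of its namespace array), `bdfs` (PCI address, here `0000:00:10.0`), and `ordered_ctrls` (indexed by controller number so iteration order is stable). The `pci_can_use` gate passes for each controller because both the allow and block lists are empty, which is what the bare `[[ =~ 0000:00:12.0 ]]` and `[[ -z '' ]]` trace lines reflect. A hedged sketch of such a filter, under the assumption that the lists are plain space-separated variables with the illustrative names `PCI_ALLOWED`/`PCI_BLOCKED`:

```bash
#!/usr/bin/env bash
# Sketch of a pci_can_use-style filter: a device is usable when it is not
# blocked and either no allow-list is set or it appears on the allow-list.
# PCI_ALLOWED/PCI_BLOCKED as space-separated lists are an assumption here.
pci_can_use() {
  local bdf=$1 i
  for i in $PCI_BLOCKED; do
    [[ $i == "$bdf" ]] && return 1      # explicitly blocked
  done
  [[ -z $PCI_ALLOWED ]] && return 0     # empty allow-list: everything passes
  for i in $PCI_ALLOWED; do
    [[ $i == "$bdf" ]] && return 0
  done
  return 1
}

PCI_ALLOWED="" PCI_BLOCKED=""           # both empty, as in the trace
pci_can_use 0000:00:12.0 && echo "0000:00:12.0 is usable"
```

With the filter passed, the loop restarts the same id-ctrl parse for `nvme2` at `0000:00:12.0`, which is what the remainder of the trace shows.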
06:02:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.330 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:37.331 06:02:02 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.331 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
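The records above repeat nvme/functions.sh@16-23 once per id-ctrl field. A minimal sketch of that parsing pattern, assuming a $NVME variable for the nvme-cli binary and simple whitespace trimming (both assumptions; only the loop shape is taken from the trace):

    nvme_get() {                          # sketch of the traced helper, not the verbatim source
        local ref=$1 reg val              # @17: e.g. ref=nvme2
        shift                             # @18: remaining args form the nvme-cli invocation
        local -gA "$ref=()"               # @20: declare the global associative array
        while IFS=: read -r reg val; do   # @21: split each output line at the first ':'
            [[ -n $val ]] || continue     # @22: skip headers and lines with no value part
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""  # @23: nvme2[vid]="0x1b36", ...
        done < <("$NVME" "$@")            # @16: $NVME assumed; the log runs /usr/local/src/nvme-cli/nvme
    }

Populated this way, nvme2[sn], nvme2[mdts], and the rest hold exactly the values echoed in the trace.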
00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.332 06:02:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:37.332 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
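A usage sketch for the array being filled here: the hex capability words can be tested with shell arithmetic once nvme_get returns. Per the NVMe base specification, ONCS bit 2 advertises Dataset Management, so the 0x15d captured above implies DSM support; the check below is illustrative, not part of the test suite:

    if (( ${nvme2[oncs]} & (1 << 2) )); then   # 0x15d has bit 2 set
        echo "nvme2 supports Dataset Management"
    fi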
00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:37.333 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:37.598 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:37.599 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
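The id-ns parse underway here was launched by the per-namespace loop traced at functions.sh@53-58. A hypothetical reconstruction, generalizing the literal nvme2_ns nameref seen at @53 and assuming $ctrl holds the sysfs path /sys/class/nvme/nvme2 (the source runs this inside a function, hence local):

    local -n _ctrl_ns=${ctrl##*/}_ns            # @53: nameref onto nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do         # @54: globs nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue                # @55: only existing namespace nodes
        ns_dev=${ns##*/}                        # @56: nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # @57: fills nvme2n1[...] as shown in this trace
        _ctrl_ns[${ns##*n}]=$ns_dev             # @58: index by namespace number
    done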
00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
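The flbas=0x4 captured above selects which of the lbaf rows parsed further below is the active format. A sketch of the decode, assuming the low nibble is the whole format index (valid here since nulbaf=0):

    fmt=$(( ${nvme2n1[flbas]} & 0xf ))             # 0x4 -> LBA format 4
    lbads=${nvme2n1[lbaf$fmt]#*lbads:}             # 'ms:0 lbads:12 rp:0 (in use)' -> '12 rp:0 (in use)'
    lbads=${lbads%% *}                             # -> 12
    echo "$(( 1 << lbads ))-byte logical blocks"   # -> 4096, matching the '(in use)' row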
00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.599 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:37.600 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:37.601 06:02:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:37.601 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:37.602 06:02:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:37.602 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
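For orientation, what the trace above is exercising: the harness's nvme_get helper shells out to nvme-cli (/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3), splits every "field : value" line of the output on the first colon, and evals each pair into a global associative array (nvme2n3) that later steps read back by name. A minimal standalone sketch of that pattern, assuming nvme-cli is on PATH and the device node exists; the function name and the trailing usage lines are illustrative, not the exact nvme/functions.sh source:

#!/usr/bin/env bash
# Sketch of the nvme_get pattern traced above: parse `nvme id-ns` output
# ("field : value" per line) into a global associative array via the same
# IFS=: / read -r / eval loop that the xtrace shows.
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                  # e.g. declare -gA nvme2n3=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # "lbaf  4" -> "lbaf4", as in the trace
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\"\${val# }\""   # e.g. nvme2n3[nsze]=0x100000
    done < <(nvme id-ns "$dev")
}

nvme_get_sketch nvme2n3 /dev/nvme2n3
echo "nsze=${nvme2n3[nsze]} flbas=${nvme2n3[flbas]}"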
00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:37.603 
06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:37.603 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.604 06:02:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:37.604 06:02:03 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.604 06:02:03 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:37.604 06:02:03 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.604 06:02:03 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:37.604 06:02:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.604 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 
06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:37.605 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:37.606 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 
06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:37.607 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:37.608 06:02:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
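This stretch of trace is get_ctrls_with_feature walking every discovered controller, reading its CTRATT word back out of the per-controller associative array through a bash nameref (local -n _ctrl=...), and testing bit 19, the Flexible Data Placement attribute from NVMe TP4146. In this run only nvme3 reports ctratt=0x88010 (bit 19 set); the controllers reporting 0x8000 are skipped. A compact sketch of that selection step, with the arrays stubbed to the values in the log (function name illustrative):

#!/usr/bin/env bash
# Sketch of the FDP selection traced above: nameref into each controller's
# array, then test CTRATT bit 19 (Flexible Data Placement).
declare -A nvme0=([ctratt]=0x8000) nvme1=([ctratt]=0x8000)
declare -A nvme2=([ctratt]=0x8000) nvme3=([ctratt]=0x88010)

ctrl_has_fdp_sketch() {
    local -n _ctrl=$1                 # nameref, as at functions.sh@73
    (( _ctrl[ctratt] & 1 << 19 ))     # succeeds only when the FDP bit is set
}

for ctrl in nvme0 nvme1 nvme2 nvme3; do
    ctrl_has_fdp_sketch "$ctrl" && echo "$ctrl"   # prints: nvme3
done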
00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:37.608 06:02:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:37.608 06:02:03 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:37.609 06:02:03 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:38.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.758 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.758 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.758 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.758 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.758 06:02:04 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:38.758 06:02:04 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:38.758 06:02:04 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.758 06:02:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:38.758 ************************************ 00:09:38.758 START TEST nvme_flexible_data_placement 00:09:38.758 ************************************ 00:09:38.758 06:02:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.019 Initializing NVMe Controllers 00:09:39.019 Attaching to 0000:00:13.0 00:09:39.019 Controller supports FDP Attached to 0000:00:13.0 00:09:39.019 Namespace ID: 1 Endurance Group ID: 1 00:09:39.019 Initialization complete. 00:09:39.019 00:09:39.019 ================================== 00:09:39.019 == FDP tests for Namespace: #01 == 00:09:39.019 ================================== 00:09:39.019 00:09:39.019 Get Feature: FDP: 00:09:39.019 ================= 00:09:39.019 Enabled: Yes 00:09:39.019 FDP configuration Index: 0 00:09:39.019 00:09:39.019 FDP configurations log page 00:09:39.019 =========================== 00:09:39.019 Number of FDP configurations: 1 00:09:39.019 Version: 0 00:09:39.019 Size: 112 00:09:39.019 FDP Configuration Descriptor: 0 00:09:39.019 Descriptor Size: 96 00:09:39.019 Reclaim Group Identifier format: 2 00:09:39.019 FDP Volatile Write Cache: Not Present 00:09:39.019 FDP Configuration: Valid 00:09:39.019 Vendor Specific Size: 0 00:09:39.019 Number of Reclaim Groups: 2 00:09:39.019 Number of Reclaim Unit Handles: 8 00:09:39.019 Max Placement Identifiers: 128 00:09:39.019 Number of Namespaces Supported: 256 00:09:39.019 Reclaim unit Nominal Size: 6000000 bytes 00:09:39.019 Estimated Reclaim Unit Time Limit: Not Reported 00:09:39.019 RUH Desc #000: RUH Type: Initially Isolated 00:09:39.019 RUH Desc #001: RUH Type: Initially Isolated 00:09:39.019 RUH Desc #002: RUH Type: Initially Isolated 00:09:39.019 RUH Desc #003: RUH Type: Initially Isolated 00:09:39.019 RUH Desc #004: RUH Type: Initially Isolated 00:09:39.019 RUH Desc #005: RUH Type: Initially Isolated 00:09:39.019 RUH Desc #006: RUH Type: Initially Isolated 00:09:39.019 RUH Desc #007: RUH Type: Initially Isolated 00:09:39.019 00:09:39.019 FDP reclaim unit handle usage log page 00:09:39.019 ====================================== 00:09:39.019 Number of Reclaim Unit Handles: 8 00:09:39.019 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:39.019 RUH Usage Desc #001: RUH Attributes: Unused 00:09:39.019 RUH Usage Desc #002: RUH Attributes: Unused 00:09:39.019 RUH Usage Desc #003: RUH Attributes: Unused 00:09:39.019 RUH Usage Desc #004: RUH Attributes: Unused 00:09:39.019 RUH Usage Desc #005: RUH Attributes: Unused 00:09:39.019 RUH Usage Desc #006: RUH Attributes: Unused 00:09:39.019 RUH Usage Desc #007: RUH Attributes: Unused 00:09:39.019 00:09:39.019 FDP statistics log page 00:09:39.019 ======================= 00:09:39.019 Host bytes with metadata written: 2072694784 00:09:39.019 Media bytes with metadata written: 2073133056 00:09:39.019 Media bytes erased: 0 00:09:39.019 00:09:39.019 FDP Reclaim unit handle status 00:09:39.019 ============================== 00:09:39.019 Number of RUHS descriptors: 2 00:09:39.019 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002753 00:09:39.019 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:39.019 00:09:39.019 FDP write on placement id: 0 success 00:09:39.019 00:09:39.019 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:39.019 00:09:39.019 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:39.019 00:09:39.019 Get Feature: FDP Events for Placement handle: #0 00:09:39.019 ======================== 00:09:39.019 Number of FDP Events: 6 00:09:39.019 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:39.019 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:39.019 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:39.019 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:39.019 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:39.019 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:39.019 00:09:39.019 FDP events log page 00:09:39.019 =================== 00:09:39.019 Number of FDP events: 1 00:09:39.019 FDP Event #0: 00:09:39.019 Event Type: RU Not Written to Capacity 00:09:39.019 Placement Identifier: Valid 00:09:39.019 NSID: Valid 00:09:39.019 Location: Valid 00:09:39.019 Placement Identifier: 0 00:09:39.019 Event Timestamp: 2 00:09:39.019 Namespace Identifier: 1 00:09:39.019 Reclaim Group Identifier: 0 00:09:39.019 Reclaim Unit Handle Identifier: 0 00:09:39.019 00:09:39.019 FDP test passed 00:09:39.019 ************************************ 00:09:39.019 END TEST nvme_flexible_data_placement 00:09:39.019 ************************************ 00:09:39.019 00:09:39.019 real 0m0.190s 00:09:39.019 user 0m0.044s 00:09:39.019 sys 0m0.045s 00:09:39.019 06:02:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.019 06:02:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:39.020 ************************************ 00:09:39.020 END TEST nvme_fdp 00:09:39.020 ************************************ 00:09:39.020 00:09:39.020 real 0m7.631s 00:09:39.020 user 0m1.057s 00:09:39.020 sys 0m1.357s 00:09:39.020 06:02:04 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.020 06:02:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.020 06:02:04 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:39.020 06:02:04 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.020 06:02:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.020 06:02:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.020 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:09:39.020 ************************************ 00:09:39.020 START TEST nvme_rpc 00:09:39.020 ************************************ 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.020 * Looking for test storage... 
00:09:39.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.020 06:02:04 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:39.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.020 --rc genhtml_branch_coverage=1 00:09:39.020 --rc genhtml_function_coverage=1 00:09:39.020 --rc genhtml_legend=1 00:09:39.020 --rc geninfo_all_blocks=1 00:09:39.020 --rc geninfo_unexecuted_blocks=1 00:09:39.020 00:09:39.020 ' 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:39.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.020 --rc genhtml_branch_coverage=1 00:09:39.020 --rc genhtml_function_coverage=1 00:09:39.020 --rc genhtml_legend=1 00:09:39.020 --rc geninfo_all_blocks=1 00:09:39.020 --rc geninfo_unexecuted_blocks=1 00:09:39.020 00:09:39.020 ' 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:39.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.020 --rc genhtml_branch_coverage=1 00:09:39.020 --rc genhtml_function_coverage=1 00:09:39.020 --rc genhtml_legend=1 00:09:39.020 --rc geninfo_all_blocks=1 00:09:39.020 --rc geninfo_unexecuted_blocks=1 00:09:39.020 00:09:39.020 ' 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:39.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.020 --rc genhtml_branch_coverage=1 00:09:39.020 --rc genhtml_function_coverage=1 00:09:39.020 --rc genhtml_legend=1 00:09:39.020 --rc geninfo_all_blocks=1 00:09:39.020 --rc geninfo_unexecuted_blocks=1 00:09:39.020 00:09:39.020 ' 00:09:39.020 06:02:04 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.020 06:02:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.020 06:02:04 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:39.280 06:02:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:39.280 06:02:04 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77554 00:09:39.280 06:02:04 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:39.280 06:02:04 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:39.280 06:02:04 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77554 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77554 ']' 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.280 06:02:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.280 [2024-10-01 06:02:04.755587] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
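[Note: the bdf targeted by this nvme_rpc run is simply the first traddr emitted by gen_nvme.sh, as the get_first_nvme_bdf trace above shows. A condensed sketch using the same scripts; the head -n1 shortcut assumes the emitted order matches the helper's, which it does in this run:
  # Sketch: pick the first NVMe bdf, mirroring get_first_nvme_bdf.
  bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  echo "$bdf"   # 0000:00:10.0 on this machine
]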
00:09:39.280 [2024-10-01 06:02:04.755698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77554 ] 00:09:39.280 [2024-10-01 06:02:04.888111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.539 [2024-10-01 06:02:04.918263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.539 [2024-10-01 06:02:04.918266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.108 06:02:05 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.108 06:02:05 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:40.108 06:02:05 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:40.367 Nvme0n1 00:09:40.367 06:02:05 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:40.367 06:02:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:40.625 request: 00:09:40.625 { 00:09:40.625 "bdev_name": "Nvme0n1", 00:09:40.625 "filename": "non_existing_file", 00:09:40.625 "method": "bdev_nvme_apply_firmware", 00:09:40.625 "req_id": 1 00:09:40.625 } 00:09:40.625 Got JSON-RPC error response 00:09:40.625 response: 00:09:40.625 { 00:09:40.625 "code": -32603, 00:09:40.625 "message": "open file failed." 00:09:40.625 } 00:09:40.625 06:02:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:40.625 06:02:06 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:40.625 06:02:06 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:40.625 06:02:06 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:40.625 06:02:06 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77554 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77554 ']' 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77554 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77554 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.625 killing process with pid 77554 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77554' 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77554 00:09:40.625 06:02:06 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77554 00:09:40.886 00:09:40.886 real 0m1.944s 00:09:40.886 user 0m3.808s 00:09:40.886 sys 0m0.425s 00:09:40.886 06:02:06 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.886 ************************************ 00:09:40.886 END TEST nvme_rpc 00:09:40.886 ************************************ 00:09:40.886 06:02:06 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.886 06:02:06 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:40.886 06:02:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:40.886 06:02:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.886 06:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:40.886 ************************************ 00:09:40.886 START TEST nvme_rpc_timeouts 00:09:40.886 ************************************ 00:09:40.886 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:41.147 * Looking for test storage... 00:09:41.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.147 06:02:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:41.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.147 --rc genhtml_branch_coverage=1 00:09:41.147 --rc genhtml_function_coverage=1 00:09:41.147 --rc genhtml_legend=1 00:09:41.147 --rc geninfo_all_blocks=1 00:09:41.147 --rc geninfo_unexecuted_blocks=1 00:09:41.147 00:09:41.147 ' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:41.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.147 --rc genhtml_branch_coverage=1 00:09:41.147 --rc genhtml_function_coverage=1 00:09:41.147 --rc genhtml_legend=1 00:09:41.147 --rc geninfo_all_blocks=1 00:09:41.147 --rc geninfo_unexecuted_blocks=1 00:09:41.147 00:09:41.147 ' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:41.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.147 --rc genhtml_branch_coverage=1 00:09:41.147 --rc genhtml_function_coverage=1 00:09:41.147 --rc genhtml_legend=1 00:09:41.147 --rc geninfo_all_blocks=1 00:09:41.147 --rc geninfo_unexecuted_blocks=1 00:09:41.147 00:09:41.147 ' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:41.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.147 --rc genhtml_branch_coverage=1 00:09:41.147 --rc genhtml_function_coverage=1 00:09:41.147 --rc genhtml_legend=1 00:09:41.147 --rc geninfo_all_blocks=1 00:09:41.147 --rc geninfo_unexecuted_blocks=1 00:09:41.147 00:09:41.147 ' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.147 06:02:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77608 00:09:41.147 06:02:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77608 00:09:41.147 06:02:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77640 00:09:41.147 06:02:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:09:41.147 06:02:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77640 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 77640 ']' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.147 06:02:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 06:02:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:41.147 [2024-10-01 06:02:06.686702] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:09:41.147 [2024-10-01 06:02:06.686815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77640 ] 00:09:41.408 [2024-10-01 06:02:06.822952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:41.408 [2024-10-01 06:02:06.855363] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.408 [2024-10-01 06:02:06.855459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.979 Checking default timeout settings: 00:09:41.979 06:02:07 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.979 06:02:07 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:41.979 06:02:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:41.979 06:02:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:42.239 Making settings changes with rpc: 00:09:42.239 06:02:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:42.239 06:02:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:42.500 Check default vs. modified settings: 00:09:42.500 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:42.500 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77608 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77608 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:42.760 Setting action_on_timeout is changed as expected. 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:42.760 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77608 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77608 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:43.020 Setting timeout_us is changed as expected. 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77608 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77608 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:43.020 Setting timeout_admin_us is changed as expected. 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77608 /tmp/settings_modified_77608 00:09:43.020 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77640 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 77640 ']' 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 77640 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77640 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.021 killing process with pid 77640 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77640' 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 77640 00:09:43.021 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 77640 00:09:43.280 06:02:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:43.280 RPC TIMEOUT SETTING TEST PASSED. 
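[Note: the three passing comparisons above reduce to: save the config, set the options over RPC, save again, and compare the bdev_nvme_set_options params. A minimal reproduction against a fresh spdk_tgt, using only flags shown verbatim in this log; the /tmp file names and the default /var/tmp/spdk.sock are assumptions:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default     # before: action none, timeouts 0
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified    # after: abort / 12000000 / 24000000
  for s in action_on_timeout timeout_us timeout_admin_us; do
      diff <(grep "$s" /tmp/settings_default) <(grep "$s" /tmp/settings_modified)
  done
]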
00:09:43.280 00:09:43.280 real 0m2.184s 00:09:43.280 user 0m4.430s 00:09:43.280 sys 0m0.446s 00:09:43.280 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.280 ************************************ 00:09:43.280 END TEST nvme_rpc_timeouts 00:09:43.280 ************************************ 00:09:43.280 06:02:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:43.280 06:02:08 -- spdk/autotest.sh@239 -- # uname -s 00:09:43.280 06:02:08 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:43.280 06:02:08 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:43.280 06:02:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.280 06:02:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.280 06:02:08 -- common/autotest_common.sh@10 -- # set +x 00:09:43.280 ************************************ 00:09:43.280 START TEST sw_hotplug 00:09:43.280 ************************************ 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:43.280 * Looking for test storage... 00:09:43.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.280 06:02:08 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:43.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.280 --rc genhtml_branch_coverage=1 00:09:43.280 --rc genhtml_function_coverage=1 00:09:43.280 --rc genhtml_legend=1 00:09:43.280 --rc geninfo_all_blocks=1 00:09:43.280 --rc geninfo_unexecuted_blocks=1 00:09:43.280 00:09:43.280 ' 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:43.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.280 --rc genhtml_branch_coverage=1 00:09:43.280 --rc genhtml_function_coverage=1 00:09:43.280 --rc genhtml_legend=1 00:09:43.280 --rc geninfo_all_blocks=1 00:09:43.280 --rc geninfo_unexecuted_blocks=1 00:09:43.280 00:09:43.280 ' 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:43.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.280 --rc genhtml_branch_coverage=1 00:09:43.280 --rc genhtml_function_coverage=1 00:09:43.280 --rc genhtml_legend=1 00:09:43.280 --rc geninfo_all_blocks=1 00:09:43.280 --rc geninfo_unexecuted_blocks=1 00:09:43.280 00:09:43.280 ' 00:09:43.280 06:02:08 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:43.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.280 --rc genhtml_branch_coverage=1 00:09:43.280 --rc genhtml_function_coverage=1 00:09:43.280 --rc genhtml_legend=1 00:09:43.280 --rc geninfo_all_blocks=1 00:09:43.280 --rc geninfo_unexecuted_blocks=1 00:09:43.280 00:09:43.280 ' 00:09:43.280 06:02:08 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:43.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.847 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:43.847 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:43.847 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:43.847 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:43.847 06:02:09 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:43.847 06:02:09 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:43.847 06:02:09 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
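[Note: nvme_in_userspace, expanded in the trace that follows, selects devices whose PCI class/subclass/progif is 01/08/02 (NVM Express). The traced pipeline condenses to one line built from the same commands shown below:
  # Sketch: list NVMe functions by PCI class code, as iter_pci_class_code 01 08 02 does.
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
]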
00:09:43.847 06:02:09 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:43.847 06:02:09 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:43.847 06:02:09 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:43.847 06:02:09 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:43.847 06:02:09 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:43.847 06:02:09 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:44.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.368 Waiting for block devices as requested 00:09:44.368 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.368 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.629 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.629 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.919 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:49.919 06:02:15 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:49.919 06:02:15 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.179 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:50.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.179 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:50.439 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:50.699 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.699 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.699 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:50.699 06:02:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:50.960 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78480 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:50.961 06:02:16 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:50.961 06:02:16 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:50.961 06:02:16 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:50.961 06:02:16 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:50.961 06:02:16 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:50.961 06:02:16 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:50.961 Initializing NVMe Controllers 00:09:50.961 Attaching to 0000:00:10.0 00:09:50.961 Attaching to 0000:00:11.0 00:09:50.961 Attached to 0000:00:10.0 00:09:50.961 Attached to 0000:00:11.0 00:09:50.961 Initialization complete. Starting I/O... 
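[Note: the remove/attach cycles below are driven through the standard Linux PCI sysfs knobs; the exact sysfs paths are inferred from that interface, not printed verbatim in the trace:
  bdf=0000:00:10.0
  echo 1 > /sys/bus/pci/devices/$bdf/remove   # surprise hot-remove; triggers the failed-state errors logged below
  sleep 6                                     # hotplug_wait from the script
  echo 1 > /sys/bus/pci/rescan                # re-enumerate; the controller re-attaches
]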
00:09:50.961 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:50.961 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:50.961 00:09:52.347 QEMU NVMe Ctrl (12340 ): 2492 I/Os completed (+2492) 00:09:52.347 QEMU NVMe Ctrl (12341 ): 2492 I/Os completed (+2492) 00:09:52.347 00:09:53.283 QEMU NVMe Ctrl (12340 ): 5617 I/Os completed (+3125) 00:09:53.283 QEMU NVMe Ctrl (12341 ): 5620 I/Os completed (+3128) 00:09:53.283 00:09:54.217 QEMU NVMe Ctrl (12340 ): 9607 I/Os completed (+3990) 00:09:54.217 QEMU NVMe Ctrl (12341 ): 9548 I/Os completed (+3928) 00:09:54.217 00:09:55.149 QEMU NVMe Ctrl (12340 ): 14543 I/Os completed (+4936) 00:09:55.149 QEMU NVMe Ctrl (12341 ): 13911 I/Os completed (+4363) 00:09:55.149 00:09:56.135 QEMU NVMe Ctrl (12340 ): 19075 I/Os completed (+4532) 00:09:56.135 QEMU NVMe Ctrl (12341 ): 18299 I/Os completed (+4388) 00:09:56.135 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.070 [2024-10-01 06:02:22.373039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:57.070 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:57.070 [2024-10-01 06:02:22.374114] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.374169] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.374189] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.374205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:57.070 [2024-10-01 06:02:22.375378] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.375414] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.375428] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.375442] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.070 [2024-10-01 06:02:22.395624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:57.070 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:57.070 [2024-10-01 06:02:22.396572] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.396611] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.396628] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.396644] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:57.070 [2024-10-01 06:02:22.397720] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.397752] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.397769] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 [2024-10-01 06:02:22.397782] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:57.070 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:57.070 EAL: Scan for (pci) bus failed. 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:57.070 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:57.070 Attaching to 0000:00:10.0 00:09:57.070 Attached to 0000:00:10.0 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.070 06:02:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:57.070 Attaching to 0000:00:11.0 00:09:57.070 Attached to 0000:00:11.0 00:09:58.001 QEMU NVMe Ctrl (12340 ): 3741 I/Os completed (+3741) 00:09:58.001 QEMU NVMe Ctrl (12341 ): 3530 I/Os completed (+3530) 00:09:58.001 00:09:58.932 QEMU NVMe Ctrl (12340 ): 7229 I/Os completed (+3488) 00:09:58.932 QEMU NVMe Ctrl (12341 ): 7415 I/Os completed (+3885) 00:09:58.932 00:10:00.301 QEMU NVMe Ctrl (12340 ): 10859 I/Os completed (+3630) 00:10:00.301 QEMU NVMe Ctrl (12341 ): 11281 I/Os completed (+3866) 00:10:00.301 00:10:01.233 QEMU NVMe Ctrl (12340 ): 14313 I/Os completed (+3454) 00:10:01.233 QEMU NVMe Ctrl (12341 ): 15033 I/Os completed (+3752) 00:10:01.233 00:10:02.165 QEMU NVMe Ctrl (12340 ): 18826 I/Os completed (+4513) 00:10:02.165 QEMU NVMe Ctrl (12341 ): 19479 I/Os completed (+4446) 00:10:02.165 00:10:03.098 QEMU NVMe Ctrl (12340 ): 23534 I/Os completed (+4708) 00:10:03.098 QEMU NVMe Ctrl (12341 ): 24028 I/Os completed (+4549) 00:10:03.098 00:10:04.029 QEMU NVMe Ctrl (12340 ): 28021 I/Os completed (+4487) 00:10:04.029 QEMU NVMe Ctrl (12341 ): 28388 I/Os completed (+4360) 
00:10:04.029 00:10:04.961 QEMU NVMe Ctrl (12340 ): 32459 I/Os completed (+4438) 00:10:04.961 QEMU NVMe Ctrl (12341 ): 32781 I/Os completed (+4393) 00:10:04.961 00:10:06.349 QEMU NVMe Ctrl (12340 ): 36664 I/Os completed (+4205) 00:10:06.349 QEMU NVMe Ctrl (12341 ): 36915 I/Os completed (+4134) 00:10:06.349 00:10:07.283 QEMU NVMe Ctrl (12340 ): 40143 I/Os completed (+3479) 00:10:07.283 QEMU NVMe Ctrl (12341 ): 40809 I/Os completed (+3894) 00:10:07.283 00:10:08.215 QEMU NVMe Ctrl (12340 ): 44089 I/Os completed (+3946) 00:10:08.215 QEMU NVMe Ctrl (12341 ): 44817 I/Os completed (+4008) 00:10:08.215 00:10:09.147 QEMU NVMe Ctrl (12340 ): 48407 I/Os completed (+4318) 00:10:09.148 QEMU NVMe Ctrl (12341 ): 49210 I/Os completed (+4393) 00:10:09.148 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.148 [2024-10-01 06:02:34.650093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:09.148 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:09.148 [2024-10-01 06:02:34.651169] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.651214] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.651232] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.651251] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.148 [2024-10-01 06:02:34.652587] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.652624] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.652638] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.652652] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.148 [2024-10-01 06:02:34.671532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:09.148 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:09.148 [2024-10-01 06:02:34.672460] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.672495] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.672512] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.672527] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.148 [2024-10-01 06:02:34.673592] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.673620] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.673637] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 [2024-10-01 06:02:34.673649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.148 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:09.148 EAL: Scan for (pci) bus failed. 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.148 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.407 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.407 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.407 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.407 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.407 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.407 Attaching to 0000:00:10.0 00:10:09.407 Attached to 0000:00:10.0 00:10:09.407 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.407 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.407 06:02:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:09.407 Attaching to 0000:00:11.0 00:10:09.407 Attached to 0000:00:11.0 00:10:10.030 QEMU NVMe Ctrl (12340 ): 2492 I/Os completed (+2492) 00:10:10.030 QEMU NVMe Ctrl (12341 ): 2512 I/Os completed (+2512) 00:10:10.030 00:10:10.971 QEMU NVMe Ctrl (12340 ): 5925 I/Os completed (+3433) 00:10:10.971 QEMU NVMe Ctrl (12341 ): 5972 I/Os completed (+3460) 00:10:10.971 00:10:12.356 QEMU NVMe Ctrl (12340 ): 9344 I/Os completed (+3419) 00:10:12.356 QEMU NVMe Ctrl (12341 ): 9808 I/Os completed (+3836) 00:10:12.356 00:10:13.298 QEMU NVMe Ctrl (12340 ): 12849 I/Os completed (+3505) 00:10:13.298 QEMU NVMe Ctrl (12341 ): 13353 I/Os completed (+3545) 00:10:13.298 00:10:14.235 QEMU NVMe Ctrl (12340 ): 16311 I/Os completed (+3462) 00:10:14.235 QEMU NVMe Ctrl (12341 ): 16781 I/Os completed (+3428) 00:10:14.235 00:10:15.173 QEMU NVMe Ctrl (12340 ): 19676 I/Os completed (+3365) 00:10:15.173 QEMU NVMe Ctrl (12341 ): 20182 I/Os completed (+3401) 00:10:15.173 00:10:16.114 QEMU NVMe Ctrl (12340 ): 22894 I/Os completed (+3218) 00:10:16.114 QEMU NVMe Ctrl (12341 ): 23515 I/Os completed (+3333) 00:10:16.114 
00:10:17.053 QEMU NVMe Ctrl (12340 ): 26172 I/Os completed (+3278) 00:10:17.053 QEMU NVMe Ctrl (12341 ): 27562 I/Os completed (+4047) 00:10:17.053 00:10:17.993 QEMU NVMe Ctrl (12340 ): 30186 I/Os completed (+4014) 00:10:17.993 QEMU NVMe Ctrl (12341 ): 32096 I/Os completed (+4534) 00:10:17.993 00:10:18.934 QEMU NVMe Ctrl (12340 ): 33640 I/Os completed (+3454) 00:10:18.934 QEMU NVMe Ctrl (12341 ): 36529 I/Os completed (+4433) 00:10:18.934 00:10:20.367 QEMU NVMe Ctrl (12340 ): 37361 I/Os completed (+3721) 00:10:20.367 QEMU NVMe Ctrl (12341 ): 41195 I/Os completed (+4666) 00:10:20.367 00:10:20.939 QEMU NVMe Ctrl (12340 ): 40866 I/Os completed (+3505) 00:10:20.939 QEMU NVMe Ctrl (12341 ): 45450 I/Os completed (+4255) 00:10:20.939 00:10:21.510 06:02:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:21.510 06:02:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:21.510 06:02:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.510 06:02:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.510 [2024-10-01 06:02:46.927650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:21.510 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:21.510 [2024-10-01 06:02:46.928710] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.928759] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.928785] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.928804] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:21.510 [2024-10-01 06:02:46.930437] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.930474] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.930488] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.930502] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 06:02:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.510 06:02:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.510 [2024-10-01 06:02:46.949866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:21.510 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:21.510 [2024-10-01 06:02:46.950799] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.950832] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.950862] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.950876] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:21.510 [2024-10-01 06:02:46.951948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.951981] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.951996] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 [2024-10-01 06:02:46.952009] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.510 06:02:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:21.510 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:21.510 06:02:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:21.510 EAL: Scan for (pci) bus failed. 00:10:21.510 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:21.510 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:21.510 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:21.510 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:21.510 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.510 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:21.510 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:21.510 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:21.510 Attaching to 0000:00:10.0 00:10:21.510 Attached to 0000:00:10.0 00:10:21.769 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:21.769 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.769 06:02:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:21.769 Attaching to 0000:00:11.0 00:10:21.769 Attached to 0000:00:11.0 00:10:21.769 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:21.769 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:21.769 [2024-10-01 06:02:47.203527] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:33.979 06:02:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:33.979 06:02:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:33.979 06:02:59 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.83 00:10:33.979 06:02:59 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.83 00:10:33.979 06:02:59 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:33.979 06:02:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:10:33.979 06:02:59 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:10:33.980 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 06:02:59 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78480 00:10:40.561 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78480) - No such process 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78480 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79033 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79033 00:10:40.561 06:03:05 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:40.561 06:03:05 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79033 ']' 00:10:40.561 06:03:05 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.561 06:03:05 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.561 06:03:05 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.561 06:03:05 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.561 06:03:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.561 [2024-10-01 06:03:05.281079] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:40.561 [2024-10-01 06:03:05.281193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79033 ] 00:10:40.561 [2024-10-01 06:03:05.410774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.561 [2024-10-01 06:03:05.442928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:40.561 06:03:06 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:40.561 06:03:06 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.138 06:03:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.138 06:03:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.138 06:03:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.138 [2024-10-01 06:03:12.222222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:47.138 [2024-10-01 06:03:12.223470] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.138 [2024-10-01 06:03:12.223501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.138 [2024-10-01 06:03:12.223517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.138 [2024-10-01 06:03:12.223530] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.138 [2024-10-01 06:03:12.223538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.138 [2024-10-01 06:03:12.223545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.138 [2024-10-01 06:03:12.223554] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.138 [2024-10-01 06:03:12.223560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.138 [2024-10-01 06:03:12.223568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.138 [2024-10-01 06:03:12.223574] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.138 [2024-10-01 06:03:12.223582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.138 [2024-10-01 06:03:12.223588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.138 [2024-10-01 06:03:12.622225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:47.138 [2024-10-01 06:03:12.623285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.138 [2024-10-01 06:03:12.623314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.138 [2024-10-01 06:03:12.623324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.138 [2024-10-01 06:03:12.623338] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.138 [2024-10-01 06:03:12.623345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.138 [2024-10-01 06:03:12.623353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.138 [2024-10-01 06:03:12.623360] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.138 [2024-10-01 06:03:12.623368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.138 [2024-10-01 06:03:12.623374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.138 [2024-10-01 06:03:12.623384] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.138 [2024-10-01 06:03:12.623391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.138 [2024-10-01 06:03:12.623399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.138 06:03:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.138 06:03:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.138 06:03:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:47.138 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.397 06:03:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:59.594 06:03:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:59.594 06:03:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:59.594 06:03:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:59.594 06:03:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.594 06:03:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.594 06:03:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.594 06:03:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.594 06:03:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.594 06:03:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.594 06:03:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.594 06:03:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.594 06:03:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:59.594 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:59.594 [2024-10-01 06:03:25.122481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:59.594 [2024-10-01 06:03:25.123601] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.594 [2024-10-01 06:03:25.123631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.594 [2024-10-01 06:03:25.123644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.594 [2024-10-01 06:03:25.123659] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.594 [2024-10-01 06:03:25.123668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.594 [2024-10-01 06:03:25.123675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.594 [2024-10-01 06:03:25.123684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.594 [2024-10-01 06:03:25.123691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.594 [2024-10-01 06:03:25.123699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.594 [2024-10-01 06:03:25.123705] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.594 [2024-10-01 06:03:25.123714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.594 [2024-10-01 06:03:25.123721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.160 [2024-10-01 06:03:25.522482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:00.160 [2024-10-01 06:03:25.523729] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.160 [2024-10-01 06:03:25.523763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.160 [2024-10-01 06:03:25.523773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.160 [2024-10-01 06:03:25.523786] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.160 [2024-10-01 06:03:25.523794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.160 [2024-10-01 06:03:25.523803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.160 [2024-10-01 06:03:25.523810] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.160 [2024-10-01 06:03:25.523818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.160 [2024-10-01 06:03:25.523825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.160 [2024-10-01 06:03:25.523835] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.160 [2024-10-01 06:03:25.523841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.160 [2024-10-01 06:03:25.523863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.160 06:03:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.160 06:03:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.160 06:03:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:00.160 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:00.417 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.417 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.417 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.417 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:00.417 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:00.417 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.417 06:03:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.653 06:03:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.653 06:03:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.653 06:03:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.653 [2024-10-01 06:03:37.922751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:12.653 [2024-10-01 06:03:37.924167] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.653 [2024-10-01 06:03:37.924279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.653 [2024-10-01 06:03:37.924360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.653 [2024-10-01 06:03:37.924419] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.653 [2024-10-01 06:03:37.924441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.653 [2024-10-01 06:03:37.924464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.653 [2024-10-01 06:03:37.924491] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.653 [2024-10-01 06:03:37.924541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.653 [2024-10-01 06:03:37.924568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.653 [2024-10-01 06:03:37.924593] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.653 [2024-10-01 06:03:37.924611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.653 [2024-10-01 06:03:37.924666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.653 06:03:37 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.653 06:03:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.653 06:03:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.653 06:03:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:12.653 06:03:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:12.910 [2024-10-01 06:03:38.322760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:12.910 [2024-10-01 06:03:38.323838] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.910 [2024-10-01 06:03:38.323885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.910 [2024-10-01 06:03:38.323896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.910 [2024-10-01 06:03:38.323910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.911 [2024-10-01 06:03:38.323918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.911 [2024-10-01 06:03:38.323928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.911 [2024-10-01 06:03:38.323935] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.911 [2024-10-01 06:03:38.323943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.911 [2024-10-01 06:03:38.323950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.911 [2024-10-01 06:03:38.323959] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.911 [2024-10-01 06:03:38.323966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.911 [2024-10-01 06:03:38.323974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.911 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:12.911 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.911 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.911 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.911 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.911 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:12.911 06:03:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.911 06:03:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.911 06:03:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.168 06:03:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.354 06:03:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.354 06:03:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.354 06:03:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:25.354 06:03:50 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.65 00:11:25.354 06:03:50 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.65 00:11:25.354 06:03:50 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65 00:11:25.354 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2 00:11:25.355 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.355 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.355 06:03:50 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:25.355 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:25.355 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:25.355 06:03:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:25.355 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:25.355 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:25.355 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:25.355 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:25.355 06:03:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.912 06:03:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.912 06:03:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.912 06:03:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:31.912 06:03:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.912 [2024-10-01 06:03:56.909094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:31.912 [2024-10-01 06:03:56.909935] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.912 [2024-10-01 06:03:56.909957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.912 [2024-10-01 06:03:56.909970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.912 [2024-10-01 06:03:56.909985] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.912 [2024-10-01 06:03:56.909993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.912 [2024-10-01 06:03:56.910000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.912 [2024-10-01 06:03:56.910009] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.912 [2024-10-01 06:03:56.910015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.912 [2024-10-01 06:03:56.910025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.912 [2024-10-01 06:03:56.910032] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.912 [2024-10-01 06:03:56.910040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.912 [2024-10-01 06:03:56.910046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.912 [2024-10-01 06:03:57.309094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:31.912 [2024-10-01 06:03:57.309860] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.912 [2024-10-01 06:03:57.309890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.912 [2024-10-01 06:03:57.309900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.912 [2024-10-01 06:03:57.309912] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.912 [2024-10-01 06:03:57.309920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.912 [2024-10-01 06:03:57.309929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.912 [2024-10-01 06:03:57.309937] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.912 [2024-10-01 06:03:57.309945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.912 [2024-10-01 06:03:57.309952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.912 [2024-10-01 06:03:57.309960] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.912 [2024-10-01 06:03:57.309967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.912 [2024-10-01 06:03:57.309977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.912 06:03:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.912 06:03:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.912 06:03:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.912 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:32.170 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:32.170 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.170 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.170 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.170 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:32.170 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:32.170 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.170 06:03:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.424 06:04:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.424 06:04:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.424 06:04:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.424 06:04:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.424 06:04:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.424 06:04:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:44.424 06:04:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.424 [2024-10-01 06:04:09.809390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:44.424 [2024-10-01 06:04:09.810346] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.424 [2024-10-01 06:04:09.810372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.424 [2024-10-01 06:04:09.810387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.424 [2024-10-01 06:04:09.810402] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.424 [2024-10-01 06:04:09.810412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.424 [2024-10-01 06:04:09.810421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.424 [2024-10-01 06:04:09.810430] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.424 [2024-10-01 06:04:09.810438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.424 [2024-10-01 06:04:09.810448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.424 [2024-10-01 06:04:09.810456] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.424 [2024-10-01 06:04:09.810464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.424 [2024-10-01 06:04:09.810472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.682 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:44.682 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.682 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.682 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.682 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.682 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.682 06:04:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.683 06:04:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.683 06:04:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.940 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:44.940 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.940 [2024-10-01 06:04:10.309393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:44.940 [2024-10-01 06:04:10.310243] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.940 [2024-10-01 06:04:10.310283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.940 [2024-10-01 06:04:10.310294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.940 [2024-10-01 06:04:10.310312] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.940 [2024-10-01 06:04:10.310320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.940 [2024-10-01 06:04:10.310329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.940 [2024-10-01 06:04:10.310336] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.940 [2024-10-01 06:04:10.310344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.940 [2024-10-01 06:04:10.310350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.940 [2024-10-01 06:04:10.310359] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.940 [2024-10-01 06:04:10.310365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.940 [2024-10-01 06:04:10.310374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.198 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:45.198 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.198 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.455 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.455 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.455 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.455 06:04:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.455 06:04:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.455 06:04:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.455 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:45.455 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:45.455 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.455 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.456 06:04:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:45.456 06:04:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:45.456 06:04:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.456 06:04:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.456 06:04:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.456 06:04:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:45.713 06:04:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:45.713 06:04:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.713 06:04:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.940 06:04:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.940 06:04:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.940 06:04:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.940 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.941 [2024-10-01 06:04:23.209645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:57.941 [2024-10-01 06:04:23.210691] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.941 [2024-10-01 06:04:23.210784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.941 [2024-10-01 06:04:23.210825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.941 [2024-10-01 06:04:23.210868] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.941 [2024-10-01 06:04:23.210892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.941 [2024-10-01 06:04:23.210920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.941 [2024-10-01 06:04:23.210945] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.941 [2024-10-01 06:04:23.210963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.941 [2024-10-01 06:04:23.210990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.941 [2024-10-01 06:04:23.211014] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.941 [2024-10-01 06:04:23.211033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.941 [2024-10-01 06:04:23.211063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.941 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:57.941 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.941 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.941 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.941 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.941 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.941 06:04:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.941 06:04:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.941 06:04:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.941 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:57.941 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:58.199 [2024-10-01 06:04:23.609646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:58.199 [2024-10-01 06:04:23.610554] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.199 [2024-10-01 06:04:23.610588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.199 [2024-10-01 06:04:23.610599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.199 [2024-10-01 06:04:23.610612] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.199 [2024-10-01 06:04:23.610620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.199 [2024-10-01 06:04:23.610630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.199 [2024-10-01 06:04:23.610637] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.199 [2024-10-01 06:04:23.610648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.199 [2024-10-01 06:04:23.610655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.199 [2024-10-01 06:04:23.610663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.199 [2024-10-01 06:04:23.610669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.199 [2024-10-01 06:04:23.610678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.199 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:58.199 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.199 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.199 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.199 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.199 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
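For readers following the trace: the jq/rpc_cmd/sort trio logged at sw_hotplug.sh@12-13 above is the test's bdev_bdfs helper, and the @50-@51 records around it are the poll loop that waits for a detached controller's bdev to disappear. A minimal sketch reconstructed from the xtrace output (the in-tree version in test/nvme/sw_hotplug.sh may differ in detail; the /dev/fd/63 seen in the trace is the process substitution shown here):

    bdev_bdfs() {
        # List the PCI address of every NVMe-backed bdev the target still
        # exposes, de-duplicated; an empty list means the device is gone.
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # Poll every 0.5 s until the detached controller's bdev vanishes (@50-@51).
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done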
00:11:58.199 06:04:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.199 06:04:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.199 06:04:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.199 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:58.200 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:58.457 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.457 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.457 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:58.457 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:58.458 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.458 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.458 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.458 06:04:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:58.458 06:04:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:58.458 06:04:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.458 06:04:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.29 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.29 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.29 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.29 2 00:12:10.659 remove_attach_helper took 45.29s to complete (handling 2 nvme drive(s)) 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:10.659 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79033 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79033 ']' 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79033 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79033 00:12:10.659 killing process with pid 79033 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79033' 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79033 00:12:10.659 06:04:36 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79033 00:12:10.917 06:04:36 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:11.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.747 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:11.747 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:11.747 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.747 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.007 00:12:12.007 real 2m28.678s 00:12:12.007 user 1m49.466s 00:12:12.007 sys 0m17.740s 00:12:12.007 ************************************ 00:12:12.007 END TEST sw_hotplug 00:12:12.007 06:04:37 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.007 06:04:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.007 ************************************ 00:12:12.007 06:04:37 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:12.007 06:04:37 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:12.007 06:04:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:12.007 06:04:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.007 06:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:12.007 ************************************ 00:12:12.007 START TEST nvme_xnvme 00:12:12.007 ************************************ 00:12:12.007 06:04:37 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:12.007 * Looking for test storage... 
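The killprocess sequence traced at autotest_common.sh@950-@974 above follows a fixed recipe: validate the pid, refuse to signal a sudo wrapper, kill, then reap. An approximate reconstruction from those trace records (not the verbatim in-tree function, which handles more platforms and edge cases):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                           # @950: a pid is required
        kill -0 "$pid" || return 0                          # @954: nothing to do if already gone
        if [ "$(uname)" = Linux ]; then                     # @955
            process_name=$(ps --no-headers -o comm= "$pid") # @956: reactor_0 in this run
        fi
        [ "$process_name" = sudo ] && return 1              # @960: never kill a sudo wrapper directly
        echo "killing process with pid $pid"                # @968
        kill "$pid"                                         # @969
        wait "$pid"                                         # @974: reap and propagate the exit status
    }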
00:12:12.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:12.007 06:04:37 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:12.007 06:04:37 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:12.007 06:04:37 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:12.007 06:04:37 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.007 06:04:37 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:12.265 06:04:37 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.265 06:04:37 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:12.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.265 --rc genhtml_branch_coverage=1 00:12:12.265 --rc genhtml_function_coverage=1 00:12:12.265 --rc genhtml_legend=1 00:12:12.265 --rc geninfo_all_blocks=1 00:12:12.265 --rc geninfo_unexecuted_blocks=1 00:12:12.265 00:12:12.265 ' 00:12:12.265 06:04:37 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:12.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.265 --rc genhtml_branch_coverage=1 00:12:12.265 --rc genhtml_function_coverage=1 00:12:12.265 --rc genhtml_legend=1 00:12:12.265 --rc geninfo_all_blocks=1 00:12:12.265 --rc geninfo_unexecuted_blocks=1 00:12:12.265 00:12:12.265 ' 00:12:12.265 06:04:37 
nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:12.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.265 --rc genhtml_branch_coverage=1 00:12:12.265 --rc genhtml_function_coverage=1 00:12:12.265 --rc genhtml_legend=1 00:12:12.265 --rc geninfo_all_blocks=1 00:12:12.265 --rc geninfo_unexecuted_blocks=1 00:12:12.265 00:12:12.265 ' 00:12:12.265 06:04:37 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:12.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.265 --rc genhtml_branch_coverage=1 00:12:12.265 --rc genhtml_function_coverage=1 00:12:12.265 --rc genhtml_legend=1 00:12:12.265 --rc geninfo_all_blocks=1 00:12:12.265 --rc geninfo_unexecuted_blocks=1 00:12:12.265 00:12:12.265 ' 00:12:12.265 06:04:37 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.265 06:04:37 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.265 06:04:37 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.265 06:04:37 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.265 06:04:37 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.265 06:04:37 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:12.265 06:04:37 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.265 06:04:37 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:12.265 06:04:37 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:12.265 06:04:37 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.265 06:04:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:12.265 
************************************ 00:12:12.265 START TEST xnvme_to_malloc_dd_copy 00:12:12.265 ************************************ 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:12.265 06:04:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:12.265 { 00:12:12.265 "subsystems": [ 00:12:12.265 { 00:12:12.265 "subsystem": "bdev", 00:12:12.265 "config": [ 00:12:12.265 { 00:12:12.265 "params": { 00:12:12.265 "block_size": 512, 00:12:12.265 "num_blocks": 2097152, 00:12:12.265 "name": "malloc0" 00:12:12.265 }, 00:12:12.265 "method": "bdev_malloc_create" 00:12:12.265 }, 00:12:12.265 { 00:12:12.265 "params": { 00:12:12.265 "io_mechanism": "libaio", 00:12:12.266 "filename": "/dev/nullb0", 00:12:12.266 "name": "null0" 00:12:12.266 }, 00:12:12.266 "method": "bdev_xnvme_create" 00:12:12.266 }, 
00:12:12.266 { 00:12:12.266 "method": "bdev_wait_for_examine" 00:12:12.266 } 00:12:12.266 ] 00:12:12.266 } 00:12:12.266 ] 00:12:12.266 } 00:12:12.266 [2024-10-01 06:04:37.730513] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:12:12.266 [2024-10-01 06:04:37.730641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80390 ] 00:12:12.266 [2024-10-01 06:04:37.864404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.523 [2024-10-01 06:04:37.907407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.614  Copying: 299/1024 [MB] (299 MBps) Copying: 600/1024 [MB] (300 MBps) Copying: 901/1024 [MB] (301 MBps) Copying: 1024/1024 [MB] (average 300 MBps) 00:12:16.614 00:12:16.614 06:04:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:16.614 06:04:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:16.614 06:04:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:16.614 06:04:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:16.614 { 00:12:16.614 "subsystems": [ 00:12:16.614 { 00:12:16.614 "subsystem": "bdev", 00:12:16.614 "config": [ 00:12:16.614 { 00:12:16.614 "params": { 00:12:16.614 "block_size": 512, 00:12:16.614 "num_blocks": 2097152, 00:12:16.614 "name": "malloc0" 00:12:16.614 }, 00:12:16.614 "method": "bdev_malloc_create" 00:12:16.614 }, 00:12:16.614 { 00:12:16.614 "params": { 00:12:16.614 "io_mechanism": "libaio", 00:12:16.614 "filename": "/dev/nullb0", 00:12:16.614 "name": "null0" 00:12:16.614 }, 00:12:16.614 "method": "bdev_xnvme_create" 00:12:16.614 }, 00:12:16.614 { 00:12:16.614 "method": "bdev_wait_for_examine" 00:12:16.614 } 00:12:16.614 ] 00:12:16.614 } 00:12:16.614 ] 00:12:16.614 } 00:12:16.614 [2024-10-01 06:04:42.133616] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:12:16.614 [2024-10-01 06:04:42.133741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80451 ] 00:12:16.874 [2024-10-01 06:04:42.266354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.874 [2024-10-01 06:04:42.324812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.708  Copying: 220/1024 [MB] (220 MBps) Copying: 463/1024 [MB] (243 MBps) Copying: 769/1024 [MB] (305 MBps) Copying: 1024/1024 [MB] (average 267 MBps) 00:12:21.708 00:12:21.708 06:04:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:21.708 06:04:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:21.708 06:04:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:21.708 06:04:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:21.708 06:04:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:21.708 06:04:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:21.708 { 00:12:21.708 "subsystems": [ 00:12:21.708 { 00:12:21.708 "subsystem": "bdev", 00:12:21.708 "config": [ 00:12:21.708 { 00:12:21.708 "params": { 00:12:21.708 "block_size": 512, 00:12:21.708 "num_blocks": 2097152, 00:12:21.708 "name": "malloc0" 00:12:21.708 }, 00:12:21.708 "method": "bdev_malloc_create" 00:12:21.708 }, 00:12:21.708 { 00:12:21.708 "params": { 00:12:21.708 "io_mechanism": "io_uring", 00:12:21.708 "filename": "/dev/nullb0", 00:12:21.708 "name": "null0" 00:12:21.708 }, 00:12:21.709 "method": "bdev_xnvme_create" 00:12:21.709 }, 00:12:21.709 { 00:12:21.709 "method": "bdev_wait_for_examine" 00:12:21.709 } 00:12:21.709 ] 00:12:21.709 } 00:12:21.709 ] 00:12:21.709 } 00:12:21.709 [2024-10-01 06:04:47.153005] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:12:21.709 [2024-10-01 06:04:47.153156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80516 ] 00:12:21.709 [2024-10-01 06:04:47.284878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.967 [2024-10-01 06:04:47.329156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.734  Copying: 310/1024 [MB] (310 MBps) Copying: 620/1024 [MB] (310 MBps) Copying: 929/1024 [MB] (309 MBps) Copying: 1024/1024 [MB] (average 309 MBps) 00:12:25.734 00:12:25.994 06:04:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:25.994 06:04:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:25.994 06:04:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:25.994 06:04:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:25.994 { 00:12:25.994 "subsystems": [ 00:12:25.994 { 00:12:25.994 "subsystem": "bdev", 00:12:25.994 "config": [ 00:12:25.994 { 00:12:25.994 "params": { 00:12:25.994 "block_size": 512, 00:12:25.994 "num_blocks": 2097152, 00:12:25.994 "name": "malloc0" 00:12:25.994 }, 00:12:25.994 "method": "bdev_malloc_create" 00:12:25.994 }, 00:12:25.994 { 00:12:25.994 "params": { 00:12:25.994 "io_mechanism": "io_uring", 00:12:25.994 "filename": "/dev/nullb0", 00:12:25.994 "name": "null0" 00:12:25.994 }, 00:12:25.994 "method": "bdev_xnvme_create" 00:12:25.994 }, 00:12:25.994 { 00:12:25.994 "method": "bdev_wait_for_examine" 00:12:25.994 } 00:12:25.994 ] 00:12:25.994 } 00:12:25.994 ] 00:12:25.994 } 00:12:25.994 [2024-10-01 06:04:51.416445] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:12:25.994 [2024-10-01 06:04:51.416588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80570 ] 00:12:25.994 [2024-10-01 06:04:51.551825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.994 [2024-10-01 06:04:51.604680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.276  Copying: 311/1024 [MB] (311 MBps) Copying: 624/1024 [MB] (312 MBps) Copying: 936/1024 [MB] (312 MBps) Copying: 1024/1024 [MB] (average 312 MBps) 00:12:30.276 00:12:30.276 06:04:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:30.276 06:04:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:30.276 00:12:30.276 real 0m18.006s 00:12:30.276 user 0m14.631s 00:12:30.276 sys 0m2.869s 00:12:30.276 06:04:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.276 ************************************ 00:12:30.276 06:04:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:30.276 END TEST xnvme_to_malloc_dd_copy 00:12:30.276 ************************************ 00:12:30.276 06:04:55 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:30.276 06:04:55 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:30.276 06:04:55 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.276 06:04:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.276 ************************************ 00:12:30.276 START TEST xnvme_bdevperf 00:12:30.276 ************************************ 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:30.276 06:04:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:30.276 { 00:12:30.276 "subsystems": [ 00:12:30.276 { 00:12:30.276 "subsystem": "bdev", 00:12:30.276 "config": [ 00:12:30.276 { 00:12:30.276 "params": { 00:12:30.276 "io_mechanism": "libaio", 00:12:30.276 "filename": "/dev/nullb0", 00:12:30.276 "name": "null0" 00:12:30.276 }, 00:12:30.276 "method": "bdev_xnvme_create" 00:12:30.276 }, 00:12:30.276 { 00:12:30.276 "method": "bdev_wait_for_examine" 00:12:30.276 } 00:12:30.276 ] 00:12:30.276 } 00:12:30.276 ] 00:12:30.276 } 00:12:30.276 [2024-10-01 06:04:55.789753] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:12:30.276 [2024-10-01 06:04:55.789856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80647 ] 00:12:30.536 [2024-10-01 06:04:55.916930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.536 [2024-10-01 06:04:55.961416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.536 Running I/O for 5 seconds... 00:12:35.641 206208.00 IOPS, 805.50 MiB/s 206528.00 IOPS, 806.75 MiB/s 206570.67 IOPS, 806.92 MiB/s 206496.00 IOPS, 806.62 MiB/s 00:12:35.641 Latency(us) 00:12:35.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.641 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:35.641 null0 : 5.00 206487.20 806.59 0.00 0.00 307.61 121.30 1518.67 00:12:35.641 =================================================================================================================== 00:12:35.641 Total : 206487.20 806.59 0.00 0.00 307.61 121.30 1518.67 00:12:35.641 06:05:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:35.641 06:05:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:35.642 06:05:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:35.642 06:05:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:35.642 06:05:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:35.642 06:05:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:35.899 { 00:12:35.899 "subsystems": [ 00:12:35.899 { 00:12:35.899 "subsystem": "bdev", 00:12:35.899 "config": [ 00:12:35.899 { 00:12:35.899 "params": { 00:12:35.899 "io_mechanism": "io_uring", 00:12:35.899 "filename": "/dev/nullb0", 00:12:35.899 "name": "null0" 00:12:35.899 }, 00:12:35.899 "method": "bdev_xnvme_create" 00:12:35.899 }, 00:12:35.899 { 00:12:35.899 "method": "bdev_wait_for_examine" 00:12:35.899 } 00:12:35.899 ] 00:12:35.899 } 00:12:35.899 ] 00:12:35.899 } 00:12:35.899 [2024-10-01 06:05:01.310906] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:12:35.899 [2024-10-01 06:05:01.311027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80716 ]
00:12:35.899 [2024-10-01 06:05:01.447359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:35.899 [2024-10-01 06:05:01.491606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:36.156 Running I/O for 5 seconds...
00:12:41.336 235200.00 IOPS, 918.75 MiB/s 235104.00 IOPS, 918.38 MiB/s 235114.67 IOPS, 918.42 MiB/s 235040.00 IOPS, 918.12 MiB/s
00:12:41.336 Latency(us)
00:12:41.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:41.336 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:12:41.336 null0 : 5.00 235005.54 917.99 0.00 0.00 270.26 148.09 1493.46
00:12:41.336 ===================================================================================================================
00:12:41.336 Total : 235005.54 917.99 0.00 0.00 270.26 148.09 1493.46
00:12:41.336 06:05:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk
00:12:41.336 06:05:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk
00:12:41.336
00:12:41.336 real 0m11.083s
00:12:41.336 user 0m8.684s
00:12:41.336 sys 0m2.167s
00:12:41.336 06:05:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:41.336 ************************************
00:12:41.336 END TEST xnvme_bdevperf
00:12:41.336 ************************************
00:12:41.336 06:05:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:41.336
00:12:41.336 real 0m29.360s
00:12:41.336 user 0m23.433s
00:12:41.336 sys 0m5.162s
00:12:41.336 ************************************
00:12:41.336 END TEST nvme_xnvme
00:12:41.336 ************************************
00:12:41.336 06:05:06 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:41.336 06:05:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:41.336 06:05:06 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:12:41.336 06:05:06 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:41.336 06:05:06 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:41.336 06:05:06 -- common/autotest_common.sh@10 -- # set +x
00:12:41.336 ************************************
00:12:41.336 START TEST blockdev_xnvme
00:12:41.336 ************************************
00:12:41.336 06:05:06 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:12:41.336 * Looking for test storage...
00:12:41.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:41.599 06:05:06 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:41.599 06:05:06 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:41.599 06:05:06 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:41.599 06:05:07 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.599 06:05:07 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:41.599 06:05:07 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.599 06:05:07 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:41.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.599 --rc genhtml_branch_coverage=1 00:12:41.599 --rc genhtml_function_coverage=1 00:12:41.599 --rc genhtml_legend=1 00:12:41.599 --rc geninfo_all_blocks=1 00:12:41.599 --rc geninfo_unexecuted_blocks=1 00:12:41.599 00:12:41.599 ' 00:12:41.599 06:05:07 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:41.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.599 --rc genhtml_branch_coverage=1 00:12:41.599 --rc genhtml_function_coverage=1 00:12:41.599 --rc genhtml_legend=1 
00:12:41.599 --rc geninfo_all_blocks=1 00:12:41.599 --rc geninfo_unexecuted_blocks=1 00:12:41.599 00:12:41.599 ' 00:12:41.599 06:05:07 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:41.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.599 --rc genhtml_branch_coverage=1 00:12:41.599 --rc genhtml_function_coverage=1 00:12:41.599 --rc genhtml_legend=1 00:12:41.599 --rc geninfo_all_blocks=1 00:12:41.600 --rc geninfo_unexecuted_blocks=1 00:12:41.600 00:12:41.600 ' 00:12:41.600 06:05:07 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:41.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.600 --rc genhtml_branch_coverage=1 00:12:41.600 --rc genhtml_function_coverage=1 00:12:41.600 --rc genhtml_legend=1 00:12:41.600 --rc geninfo_all_blocks=1 00:12:41.600 --rc geninfo_unexecuted_blocks=1 00:12:41.600 00:12:41.600 ' 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80852 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80852 00:12:41.600 06:05:07 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 80852 ']' 00:12:41.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
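At this point the test has launched the target and recorded spdk_tgt_pid=80852 (blockdev.sh@47), armed a cleanup trap (@48), and entered waitforlisten (@49); the launch command itself (blockdev.sh@46) is echoed just below. A simplified sketch of that startup handshake, assuming the stock scripts/rpc.py client (the real waitforlisten in autotest_common.sh also verifies the pid is still alive and supports TCP transports):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

    rpc_addr=/var/tmp/spdk.sock   # @835
    max_retries=100               # @836
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries--)); do
        # rpc_get_methods succeeds only once the target's RPC server accepts requests
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done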
00:12:41.600 06:05:07 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.600 06:05:07 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.600 06:05:07 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.600 06:05:07 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.600 06:05:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:41.600 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:41.600 [2024-10-01 06:05:07.118257] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:12:41.600 [2024-10-01 06:05:07.118380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80852 ] 00:12:41.859 [2024-10-01 06:05:07.253088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.859 [2024-10-01 06:05:07.313395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.426 06:05:07 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:42.426 06:05:07 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:42.426 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:42.426 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:42.426 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:42.426 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:42.426 06:05:07 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:42.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:42.943 Waiting for block devices as requested 00:12:42.943 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:42.943 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:43.203 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:43.203 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:48.524 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:48.524 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
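The is_block_zoned probes above (continuing below for the remaining namespaces) reduce to a single sysfs lookup per device. A sketch reconstructed from the @1648-@1651 trace records; every expanded comparison in this run reads [[ none != none ]], meaning queue/zoned returned "none" for all six namespaces, so they are conventional and zoned_devs stays empty:

    is_block_zoned() {
        local device=$1                                       # @1648, e.g. nvme0n1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1   # @1650: kernel exposes zoning model here
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]   # @1651: "none" means not zoned
    }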
00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:48.524 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:48.525 06:05:13 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:48.525 nvme0n1 00:12:48.525 nvme1n1 00:12:48.525 nvme2n1 00:12:48.525 nvme2n2 00:12:48.525 nvme2n3 00:12:48.525 nvme3n1 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.525 06:05:13 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.525 06:05:13 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:48.525 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:48.526 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3f3ab6e4-7bfc-4484-b689-036067150f30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3f3ab6e4-7bfc-4484-b689-036067150f30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2a7d47c2-3062-4b69-b374-5f92295b35ce"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2a7d47c2-3062-4b69-b374-5f92295b35ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "91d1d8a4-4bf1-4fe8-b5f0-4112d9878839"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91d1d8a4-4bf1-4fe8-b5f0-4112d9878839",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "eefb7075-df42-41d6-b62f-61070d284f30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eefb7075-df42-41d6-b62f-61070d284f30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "dbcf2fbb-1a19-417d-8b44-56b0f9f8e345"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dbcf2fbb-1a19-417d-8b44-56b0f9f8e345",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "caa20645-46e0-466e-9ddf-4f99833a5fe5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "caa20645-46e0-466e-9ddf-4f99833a5fe5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:48.526 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:48.526 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:48.526 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:48.526 06:05:13 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80852 00:12:48.526 06:05:13 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 80852 ']' 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 80852 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80852 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:48.526 killing process with pid 80852 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80852' 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 80852 00:12:48.526 06:05:13 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 80852 00:12:48.785 06:05:14 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:48.785 06:05:14 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:48.785 06:05:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:48.785 06:05:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.785 06:05:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.785 ************************************ 00:12:48.785 START TEST bdev_hello_world 00:12:48.785 ************************************ 00:12:48.786 06:05:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:48.786 [2024-10-01 06:05:14.376765] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:12:48.786 [2024-10-01 06:05:14.376887] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81195 ] 00:12:49.045 [2024-10-01 06:05:14.509615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.045 [2024-10-01 06:05:14.551630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.304 [2024-10-01 06:05:14.726862] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:49.304 [2024-10-01 06:05:14.726900] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:49.304 [2024-10-01 06:05:14.726918] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:49.304 [2024-10-01 06:05:14.728556] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:49.304 [2024-10-01 06:05:14.729009] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:49.304 [2024-10-01 06:05:14.729031] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:49.305 [2024-10-01 06:05:14.729263] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
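The hello-world pass above reduces to pointing SPDK's hello_bdev example at a bdev configuration and a bdev name; the harness wraps it in run_test, but the underlying invocation is visible verbatim in the xtrace. A minimal sketch using the same flags (paths are illustrative; bdev.json must declare the xNVMe bdev nvme0n1):

# Run the hello_bdev example against one named bdev.
# --json loads the bdev subsystem config; -b selects the target bdev.
./spdk/build/examples/hello_bdev --json ./bdev.json -b nvme0n1
# Expected NOTICE sequence, as in the log: open the bdev, write a buffer,
# read it back ("Read string from bdev : Hello World!"), then stop the app.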
00:12:49.305 00:12:49.305 [2024-10-01 06:05:14.729277] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:49.305 00:12:49.305 real 0m0.582s 00:12:49.305 user 0m0.305s 00:12:49.305 sys 0m0.165s 00:12:49.305 06:05:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.305 ************************************ 00:12:49.305 END TEST bdev_hello_world 00:12:49.305 ************************************ 00:12:49.305 06:05:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:49.564 06:05:14 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:49.564 06:05:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:49.564 06:05:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.564 06:05:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.564 ************************************ 00:12:49.564 START TEST bdev_bounds 00:12:49.564 ************************************ 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81226 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:49.564 Process bdevio pid: 81226 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81226' 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81226 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81226 ']' 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:49.564 06:05:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:49.564 [2024-10-01 06:05:15.018023] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
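The bounds test starting here is a client/server exchange: bdevio is launched with -w so it starts its reactors and then waits, and tests.py perform_tests (seen just below) fires the CUnit suites over the default RPC socket /var/tmp/spdk.sock. A condensed sketch of that handshake; -s 0 and the empty trailing argument are simply passed through verbatim from the harness:

# Start bdevio in wait mode; it loads bdev.json and listens for RPC.
./spdk/test/bdev/bdevio/bdevio -w -s 0 --json ./bdev.json '' &
bdevio_pid=$!
# Kick off the suites once the app is up, then tear down.
./spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid" && wait "$bdevio_pid"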
00:12:49.564 [2024-10-01 06:05:15.018147] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81226 ] 00:12:49.564 [2024-10-01 06:05:15.150988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:49.823 [2024-10-01 06:05:15.197226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.823 [2024-10-01 06:05:15.197357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.823 [2024-10-01 06:05:15.197436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.390 06:05:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.390 06:05:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:50.390 06:05:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:50.390 I/O targets: 00:12:50.390 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:50.390 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:50.390 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:50.390 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:50.390 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:50.390 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:50.390 00:12:50.390 00:12:50.390 CUnit - A unit testing framework for C - Version 2.1-3 00:12:50.390 http://cunit.sourceforge.net/ 00:12:50.390 00:12:50.390 00:12:50.390 Suite: bdevio tests on: nvme3n1 00:12:50.390 Test: blockdev write read block ...passed 00:12:50.390 Test: blockdev write zeroes read block ...passed 00:12:50.390 Test: blockdev write zeroes read no split ...passed 00:12:50.390 Test: blockdev write zeroes read split ...passed 00:12:50.390 Test: blockdev write zeroes read split partial ...passed 00:12:50.390 Test: blockdev reset ...passed 00:12:50.390 Test: blockdev write read 8 blocks ...passed 00:12:50.390 Test: blockdev write read size > 128k ...passed 00:12:50.390 Test: blockdev write read invalid size ...passed 00:12:50.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:50.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:50.390 Test: blockdev write read max offset ...passed 00:12:50.390 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:50.390 Test: blockdev writev readv 8 blocks ...passed 00:12:50.390 Test: blockdev writev readv 30 x 1block ...passed 00:12:50.390 Test: blockdev writev readv block ...passed 00:12:50.390 Test: blockdev writev readv size > 128k ...passed 00:12:50.390 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:50.390 Test: blockdev comparev and writev ...passed 00:12:50.390 Test: blockdev nvme passthru rw ...passed 00:12:50.390 Test: blockdev nvme passthru vendor specific ...passed 00:12:50.390 Test: blockdev nvme admin passthru ...passed 00:12:50.390 Test: blockdev copy ...passed 00:12:50.390 Suite: bdevio tests on: nvme2n3 00:12:50.390 Test: blockdev write read block ...passed 00:12:50.390 Test: blockdev write zeroes read block ...passed 00:12:50.390 Test: blockdev write zeroes read no split ...passed 00:12:50.649 Test: blockdev write zeroes read split ...passed 00:12:50.649 Test: blockdev write zeroes read split partial ...passed 00:12:50.649 Test: blockdev reset ...passed 
00:12:50.649 Test: blockdev write read 8 blocks ...passed 00:12:50.649 Test: blockdev write read size > 128k ...passed 00:12:50.649 Test: blockdev write read invalid size ...passed 00:12:50.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:50.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:50.649 Test: blockdev write read max offset ...passed 00:12:50.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:50.649 Test: blockdev writev readv 8 blocks ...passed 00:12:50.649 Test: blockdev writev readv 30 x 1block ...passed 00:12:50.649 Test: blockdev writev readv block ...passed 00:12:50.649 Test: blockdev writev readv size > 128k ...passed 00:12:50.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:50.649 Test: blockdev comparev and writev ...passed 00:12:50.649 Test: blockdev nvme passthru rw ...passed 00:12:50.649 Test: blockdev nvme passthru vendor specific ...passed 00:12:50.649 Test: blockdev nvme admin passthru ...passed 00:12:50.649 Test: blockdev copy ...passed 00:12:50.649 Suite: bdevio tests on: nvme2n2 00:12:50.649 Test: blockdev write read block ...passed 00:12:50.649 Test: blockdev write zeroes read block ...passed 00:12:50.649 Test: blockdev write zeroes read no split ...passed 00:12:50.649 Test: blockdev write zeroes read split ...passed 00:12:50.649 Test: blockdev write zeroes read split partial ...passed 00:12:50.649 Test: blockdev reset ...passed 00:12:50.649 Test: blockdev write read 8 blocks ...passed 00:12:50.649 Test: blockdev write read size > 128k ...passed 00:12:50.649 Test: blockdev write read invalid size ...passed 00:12:50.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:50.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:50.649 Test: blockdev write read max offset ...passed 00:12:50.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:50.649 Test: blockdev writev readv 8 blocks ...passed 00:12:50.649 Test: blockdev writev readv 30 x 1block ...passed 00:12:50.649 Test: blockdev writev readv block ...passed 00:12:50.649 Test: blockdev writev readv size > 128k ...passed 00:12:50.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:50.649 Test: blockdev comparev and writev ...passed 00:12:50.649 Test: blockdev nvme passthru rw ...passed 00:12:50.649 Test: blockdev nvme passthru vendor specific ...passed 00:12:50.649 Test: blockdev nvme admin passthru ...passed 00:12:50.649 Test: blockdev copy ...passed 00:12:50.649 Suite: bdevio tests on: nvme2n1 00:12:50.649 Test: blockdev write read block ...passed 00:12:50.649 Test: blockdev write zeroes read block ...passed 00:12:50.649 Test: blockdev write zeroes read no split ...passed 00:12:50.649 Test: blockdev write zeroes read split ...passed 00:12:50.649 Test: blockdev write zeroes read split partial ...passed 00:12:50.649 Test: blockdev reset ...passed 00:12:50.649 Test: blockdev write read 8 blocks ...passed 00:12:50.649 Test: blockdev write read size > 128k ...passed 00:12:50.649 Test: blockdev write read invalid size ...passed 00:12:50.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:50.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:50.649 Test: blockdev write read max offset ...passed 00:12:50.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:50.649 Test: blockdev writev readv 8 blocks 
...passed 00:12:50.650 Test: blockdev writev readv 30 x 1block ...passed 00:12:50.650 Test: blockdev writev readv block ...passed 00:12:50.650 Test: blockdev writev readv size > 128k ...passed 00:12:50.650 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:50.650 Test: blockdev comparev and writev ...passed 00:12:50.650 Test: blockdev nvme passthru rw ...passed 00:12:50.650 Test: blockdev nvme passthru vendor specific ...passed 00:12:50.650 Test: blockdev nvme admin passthru ...passed 00:12:50.650 Test: blockdev copy ...passed 00:12:50.650 Suite: bdevio tests on: nvme1n1 00:12:50.650 Test: blockdev write read block ...passed 00:12:50.650 Test: blockdev write zeroes read block ...passed 00:12:50.650 Test: blockdev write zeroes read no split ...passed 00:12:50.650 Test: blockdev write zeroes read split ...passed 00:12:50.650 Test: blockdev write zeroes read split partial ...passed 00:12:50.650 Test: blockdev reset ...passed 00:12:50.650 Test: blockdev write read 8 blocks ...passed 00:12:50.650 Test: blockdev write read size > 128k ...passed 00:12:50.650 Test: blockdev write read invalid size ...passed 00:12:50.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:50.650 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:50.650 Test: blockdev write read max offset ...passed 00:12:50.650 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:50.650 Test: blockdev writev readv 8 blocks ...passed 00:12:50.650 Test: blockdev writev readv 30 x 1block ...passed 00:12:50.650 Test: blockdev writev readv block ...passed 00:12:50.650 Test: blockdev writev readv size > 128k ...passed 00:12:50.650 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:50.650 Test: blockdev comparev and writev ...passed 00:12:50.650 Test: blockdev nvme passthru rw ...passed 00:12:50.650 Test: blockdev nvme passthru vendor specific ...passed 00:12:50.650 Test: blockdev nvme admin passthru ...passed 00:12:50.650 Test: blockdev copy ...passed 00:12:50.650 Suite: bdevio tests on: nvme0n1 00:12:50.650 Test: blockdev write read block ...passed 00:12:50.650 Test: blockdev write zeroes read block ...passed 00:12:50.650 Test: blockdev write zeroes read no split ...passed 00:12:50.650 Test: blockdev write zeroes read split ...passed 00:12:50.650 Test: blockdev write zeroes read split partial ...passed 00:12:50.650 Test: blockdev reset ...passed 00:12:50.650 Test: blockdev write read 8 blocks ...passed 00:12:50.650 Test: blockdev write read size > 128k ...passed 00:12:50.650 Test: blockdev write read invalid size ...passed 00:12:50.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:50.650 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:50.650 Test: blockdev write read max offset ...passed 00:12:50.650 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:50.650 Test: blockdev writev readv 8 blocks ...passed 00:12:50.650 Test: blockdev writev readv 30 x 1block ...passed 00:12:50.650 Test: blockdev writev readv block ...passed 00:12:50.650 Test: blockdev writev readv size > 128k ...passed 00:12:50.650 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:50.650 Test: blockdev comparev and writev ...passed 00:12:50.650 Test: blockdev nvme passthru rw ...passed 00:12:50.650 Test: blockdev nvme passthru vendor specific ...passed 00:12:50.650 Test: blockdev nvme admin passthru ...passed 00:12:50.650 Test: blockdev copy ...passed 
00:12:50.650 00:12:50.650 Run Summary: Type Total Ran Passed Failed Inactive 00:12:50.650 suites 6 6 n/a 0 0 00:12:50.650 tests 138 138 138 0 0 00:12:50.650 asserts 780 780 780 0 n/a 00:12:50.650 00:12:50.650 Elapsed time = 0.465 seconds 00:12:50.650 0 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81226 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81226 ']' 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81226 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81226 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.650 killing process with pid 81226 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81226' 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81226 00:12:50.650 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81226 00:12:50.911 06:05:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:50.911 00:12:50.911 real 0m1.433s 00:12:50.911 user 0m3.570s 00:12:50.911 sys 0m0.284s 00:12:50.911 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.911 ************************************ 00:12:50.911 END TEST bdev_bounds 00:12:50.911 ************************************ 00:12:50.911 06:05:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:50.911 06:05:16 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:50.911 06:05:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:50.911 06:05:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.911 06:05:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.911 ************************************ 00:12:50.911 START TEST bdev_nbd 00:12:50.911 ************************************ 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
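The NBD phase beginning here first brings up the lightweight bdev_svc app on a dedicated RPC socket (started just below with -r /var/tmp/spdk-nbd.sock) and blocks in waitforlisten until that socket accepts RPCs. A sketch of the same start-and-wait step; the polling loop is our stand-in for waitforlisten, and rpc_get_methods is just a cheap RPC used here to probe liveness:

sock=/var/tmp/spdk-nbd.sock
# bdev_svc loads the bdev config and exposes only the RPC interface.
./spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 --json ./bdev.json &
nbd_pid=$!
# Poll until the UNIX socket answers (waitforlisten equivalent).
until ./spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done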
00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81269 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81269 /var/tmp/spdk-nbd.sock 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81269 ']' 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:50.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.911 06:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:50.911 [2024-10-01 06:05:16.519119] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
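With the service listening, each bdev is exported as a kernel NBD device and probed with a one-block O_DIRECT read; the 1+0 dd transcripts below are exactly that probe (the harness writes to a scratch file it then stats and removes, whereas this sketch reads to /dev/null). The log runs two export loops: the first lets nbd_start_disk auto-assign /dev/nbd0../dev/nbd5, the second pins the pairing shown here. A condensed export-and-verify loop over that pairing, assuming the nbd kernel module is loaded (cf. the /sys/module/nbd check above):

sock=/var/tmp/spdk-nbd.sock
bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdevs[@]}"; do
    ./spdk/scripts/rpc.py -s "$sock" nbd_start_disk "${bdevs[i]}" "${nbds[i]}"
    # waitfornbd-style check: device registered, then a direct 4 KiB read.
    grep -q -w "${nbds[i]##*/}" /proc/partitions
    dd if="${nbds[i]}" of=/dev/null bs=4096 count=1 iflag=direct
done
# Inventory and teardown go through the same socket:
#   rpc.py -s "$sock" nbd_get_disks
#   rpc.py -s "$sock" nbd_stop_disk /dev/nbd0   # and so on per device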
00:12:50.911 [2024-10-01 06:05:16.519258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.171 [2024-10-01 06:05:16.652172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.171 [2024-10-01 06:05:16.692619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.109 
1+0 records in 00:12:52.109 1+0 records out 00:12:52.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140381 s, 2.9 MB/s 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:52.109 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.370 1+0 records in 00:12:52.370 1+0 records out 00:12:52.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138309 s, 3.0 MB/s 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:52.370 06:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:52.631 06:05:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.631 1+0 records in 00:12:52.631 1+0 records out 00:12:52.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000987506 s, 4.1 MB/s 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:52.631 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.892 1+0 records in 00:12:52.892 1+0 records out 00:12:52.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114624 s, 3.6 MB/s 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:52.892 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.153 1+0 records in 00:12:53.153 1+0 records out 00:12:53.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937684 s, 4.4 MB/s 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:53.153 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:53.415 06:05:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.415 1+0 records in 00:12:53.415 1+0 records out 00:12:53.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105208 s, 3.9 MB/s 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.415 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:53.416 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:53.416 06:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd0", 00:12:53.677 "bdev_name": "nvme0n1" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd1", 00:12:53.677 "bdev_name": "nvme1n1" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd2", 00:12:53.677 "bdev_name": "nvme2n1" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd3", 00:12:53.677 "bdev_name": "nvme2n2" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd4", 00:12:53.677 "bdev_name": "nvme2n3" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd5", 00:12:53.677 "bdev_name": "nvme3n1" 00:12:53.677 } 00:12:53.677 ]' 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd0", 00:12:53.677 "bdev_name": "nvme0n1" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd1", 00:12:53.677 "bdev_name": "nvme1n1" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd2", 00:12:53.677 "bdev_name": "nvme2n1" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd3", 00:12:53.677 "bdev_name": "nvme2n2" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd4", 00:12:53.677 "bdev_name": "nvme2n3" 00:12:53.677 }, 00:12:53.677 { 00:12:53.677 "nbd_device": "/dev/nbd5", 00:12:53.677 "bdev_name": "nvme3n1" 00:12:53.677 } 00:12:53.677 ]' 00:12:53.677 06:05:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.677 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.938 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.200 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.462 06:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.462 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.722 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.980 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.237 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.238 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:55.238 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.238 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:55.238 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:55.238 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:55.238 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:55.238 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:55.494 /dev/nbd0 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.494 1+0 records in 00:12:55.494 1+0 records out 00:12:55.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741732 s, 5.5 MB/s 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:55.494 06:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:55.753 /dev/nbd1 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.753 1+0 records in 00:12:55.753 1+0 records out 00:12:55.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771748 s, 5.3 MB/s 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.753 06:05:21 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:55.753 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:56.011 /dev/nbd10 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.011 1+0 records in 00:12:56.011 1+0 records out 00:12:56.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110184 s, 3.7 MB/s 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:56.011 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:56.011 /dev/nbd11 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.269 06:05:21 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.269 1+0 records in 00:12:56.269 1+0 records out 00:12:56.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000777477 s, 5.3 MB/s 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:56.269 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:56.269 /dev/nbd12 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.527 1+0 records in 00:12:56.527 1+0 records out 00:12:56.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000777307 s, 5.3 MB/s 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:56.527 06:05:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:56.527 /dev/nbd13 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.527 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.527 1+0 records in 00:12:56.527 1+0 records out 00:12:56.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111071 s, 3.7 MB/s 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd0", 00:12:56.785 "bdev_name": "nvme0n1" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd1", 00:12:56.785 "bdev_name": "nvme1n1" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd10", 00:12:56.785 "bdev_name": "nvme2n1" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd11", 00:12:56.785 "bdev_name": "nvme2n2" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd12", 00:12:56.785 "bdev_name": "nvme2n3" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd13", 00:12:56.785 "bdev_name": "nvme3n1" 00:12:56.785 } 00:12:56.785 ]' 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd0", 00:12:56.785 "bdev_name": "nvme0n1" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd1", 00:12:56.785 "bdev_name": "nvme1n1" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd10", 00:12:56.785 "bdev_name": "nvme2n1" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd11", 00:12:56.785 "bdev_name": "nvme2n2" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd12", 00:12:56.785 "bdev_name": "nvme2n3" 00:12:56.785 }, 00:12:56.785 { 00:12:56.785 "nbd_device": "/dev/nbd13", 00:12:56.785 "bdev_name": "nvme3n1" 00:12:56.785 } 00:12:56.785 ]' 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:56.785 /dev/nbd1 00:12:56.785 /dev/nbd10 00:12:56.785 /dev/nbd11 00:12:56.785 /dev/nbd12 00:12:56.785 /dev/nbd13' 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:56.785 /dev/nbd1 00:12:56.785 /dev/nbd10 00:12:56.785 /dev/nbd11 00:12:56.785 /dev/nbd12 00:12:56.785 /dev/nbd13' 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:56.785 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:57.045 256+0 records in 00:12:57.045 256+0 records out 00:12:57.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720405 s, 146 MB/s 00:12:57.045 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.045 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:57.045 256+0 records in 00:12:57.045 256+0 records out 00:12:57.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131124 s, 8.0 MB/s 00:12:57.045 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.045 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:57.307 256+0 records in 00:12:57.307 256+0 records out 00:12:57.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.287151 s, 
3.7 MB/s 00:12:57.307 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.307 06:05:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:57.568 256+0 records in 00:12:57.568 256+0 records out 00:12:57.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237164 s, 4.4 MB/s 00:12:57.568 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.568 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:57.827 256+0 records in 00:12:57.827 256+0 records out 00:12:57.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204739 s, 5.1 MB/s 00:12:57.827 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.827 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:58.085 256+0 records in 00:12:58.085 256+0 records out 00:12:58.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17718 s, 5.9 MB/s 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:58.085 256+0 records in 00:12:58.085 256+0 records out 00:12:58.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107213 s, 9.8 MB/s 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:58.085 
06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.085 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.343 06:05:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.601 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:12:58.859 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.860 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.118 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.376 
06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.376 06:05:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:59.633 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:59.921 malloc_lvol_verify 00:12:59.921 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:59.921 d6eae9cb-828b-48ad-92dd-f3486eb2db96 00:12:59.921 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:00.179 d78225a0-e265-4d9b-82e2-69beb1cbb7bb 00:13:00.179 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:00.437 /dev/nbd0 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:00.437 mke2fs 1.47.0 (5-Feb-2023) 00:13:00.437 Discarding device blocks: 0/4096 
done 00:13:00.437 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:00.437 00:13:00.437 Allocating group tables: 0/1 done 00:13:00.437 Writing inode tables: 0/1 done 00:13:00.437 Creating journal (1024 blocks): done 00:13:00.437 Writing superblocks and filesystem accounting information: 0/1 done 00:13:00.437 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.437 06:05:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81269 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81269 ']' 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81269 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81269 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:00.695 killing process with pid 81269 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81269' 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81269 00:13:00.695 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81269 00:13:00.956 06:05:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:00.956 00:13:00.956 real 0m9.940s 00:13:00.956 user 0m13.750s 00:13:00.956 sys 0m3.567s 00:13:00.956 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.956 ************************************ 00:13:00.956 END TEST bdev_nbd 00:13:00.956 ************************************ 00:13:00.956 06:05:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
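The bdev_nbd test that finishes above round-trips data through the kernel NBD layer: each xNVMe bdev is exported as a /dev/nbdN node over the /var/tmp/spdk-nbd.sock RPC socket, probed with a single direct-I/O read, written with a 1 MiB random pattern and compared back, then torn down; a malloc-backed lvol is exported and formatted with mkfs.ext4 as a final sanity check. The following is a condensed sketch of the per-device flow, not the verbatim test script: the commands, paths, and sizes are taken from the run above, but /tmp/probe and /tmp/rand stand in for the suite's nbdtest/nbdrandtest scratch files, and the simple retry loop stands in for its waitfornbd helper (which polls up to 20 times).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  $rpc -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0             # export the bdev as an NBD device
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done   # wait for the kernel node to appear
  dd if=/dev/nbd0 of=/tmp/probe bs=4096 count=1 iflag=direct   # probe: one direct 4 KiB read
  dd if=/dev/urandom of=/tmp/rand bs=4096 count=256            # build a 1 MiB reference pattern
  dd if=/tmp/rand of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD device
  cmp -b -n 1M /tmp/rand /dev/nbd0                             # read back and compare
  $rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'    # list the active exports
  $rpc -s "$sock" nbd_stop_disk /dev/nbd0                      # tear the export down

The test applies the same write/compare pass to all six devices (/dev/nbd0 through /dev/nbd13) before stopping them, which is why nbd_get_disks returns an empty list at the start and end of the section above.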
00:13:00.956 06:05:26 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:00.956 06:05:26 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:00.956 06:05:26 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:00.956 06:05:26 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:00.956 06:05:26 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:00.956 06:05:26 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.956 06:05:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.956 ************************************ 00:13:00.956 START TEST bdev_fio 00:13:00.956 ************************************ 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:00.956 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:00.956 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:00.957 ************************************ 00:13:00.957 START TEST bdev_fio_rw_verify 00:13:00.957 ************************************ 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:00.957 06:05:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.218 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.218 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.218 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.218 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.218 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.218 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.218 fio-3.35 00:13:01.218 Starting 6 threads 00:13:13.461 00:13:13.461 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81681: Tue Oct 1 06:05:38 2024 00:13:13.461 read: IOPS=13.1k, BW=51.3MiB/s (53.8MB/s)(513MiB/10002msec) 00:13:13.461 slat (usec): min=2, max=2954, avg= 7.52, stdev=18.57 00:13:13.461 clat (usec): min=95, max=32317, avg=1493.21, stdev=833.10 00:13:13.461 lat (usec): min=99, max=32321, avg=1500.73, stdev=833.82 
00:13:13.461 clat percentiles (usec): 00:13:13.461 | 50.000th=[ 1401], 99.000th=[ 3949], 99.900th=[ 5342], 99.990th=[ 7635], 00:13:13.461 | 99.999th=[32375] 00:13:13.461 write: IOPS=13.5k, BW=52.8MiB/s (55.4MB/s)(528MiB/10002msec); 0 zone resets 00:13:13.461 slat (usec): min=10, max=3869, avg=43.76, stdev=147.27 00:13:13.461 clat (usec): min=79, max=8690, avg=1759.24, stdev=875.43 00:13:13.461 lat (usec): min=96, max=8711, avg=1803.01, stdev=886.98 00:13:13.461 clat percentiles (usec): 00:13:13.461 | 50.000th=[ 1631], 99.000th=[ 4424], 99.900th=[ 5932], 99.990th=[ 7242], 00:13:13.461 | 99.999th=[ 7767] 00:13:13.461 bw ( KiB/s): min=45037, max=92192, per=100.00%, avg=54200.47, stdev=1776.58, samples=114 00:13:13.461 iops : min=11257, max=23048, avg=13548.16, stdev=444.24, samples=114 00:13:13.461 lat (usec) : 100=0.01%, 250=1.45%, 500=4.91%, 750=7.11%, 1000=9.78% 00:13:13.461 lat (msec) : 2=48.84%, 4=26.48%, 10=1.42%, 50=0.01% 00:13:13.461 cpu : usr=45.89%, sys=31.33%, ctx=5365, majf=0, minf=15648 00:13:13.461 IO depths : 1=11.2%, 2=23.6%, 4=51.4%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.461 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.461 issued rwts: total=131322,135203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.461 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:13.461 00:13:13.461 Run status group 0 (all jobs): 00:13:13.461 READ: bw=51.3MiB/s (53.8MB/s), 51.3MiB/s-51.3MiB/s (53.8MB/s-53.8MB/s), io=513MiB (538MB), run=10002-10002msec 00:13:13.461 WRITE: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=528MiB (554MB), run=10002-10002msec 00:13:13.461 ----------------------------------------------------- 00:13:13.461 Suppressions used: 00:13:13.461 count bytes template 00:13:13.461 6 48 /usr/src/fio/parse.c 00:13:13.461 3794 364224 /usr/src/fio/iolog.c 00:13:13.461 1 8 libtcmalloc_minimal.so 00:13:13.461 1 904 libcrypto.so 00:13:13.461 ----------------------------------------------------- 00:13:13.461 00:13:13.461 00:13:13.461 real 0m12.291s 00:13:13.461 user 0m28.285s 00:13:13.461 sys 0m19.168s 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:13.461 ************************************ 00:13:13.461 END TEST bdev_fio_rw_verify 00:13:13.461 ************************************ 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3f3ab6e4-7bfc-4484-b689-036067150f30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3f3ab6e4-7bfc-4484-b689-036067150f30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2a7d47c2-3062-4b69-b374-5f92295b35ce"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2a7d47c2-3062-4b69-b374-5f92295b35ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "91d1d8a4-4bf1-4fe8-b5f0-4112d9878839"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91d1d8a4-4bf1-4fe8-b5f0-4112d9878839",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "eefb7075-df42-41d6-b62f-61070d284f30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eefb7075-df42-41d6-b62f-61070d284f30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "dbcf2fbb-1a19-417d-8b44-56b0f9f8e345"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dbcf2fbb-1a19-417d-8b44-56b0f9f8e345",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "caa20645-46e0-466e-9ddf-4f99833a5fe5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "caa20645-46e0-466e-9ddf-4f99833a5fe5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:13.461 /home/vagrant/spdk_repo/spdk 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:13.461 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:13.462 06:05:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:13.462 00:13:13.462 real 0m12.474s 00:13:13.462 user 
0m28.352s 00:13:13.462 sys 0m19.255s 00:13:13.462 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.462 ************************************ 00:13:13.462 END TEST bdev_fio 00:13:13.462 ************************************ 00:13:13.462 06:05:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:13.462 06:05:38 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:13.462 06:05:38 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:13.462 06:05:38 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:13.462 06:05:38 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.462 06:05:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.462 ************************************ 00:13:13.462 START TEST bdev_verify 00:13:13.462 ************************************ 00:13:13.462 06:05:39 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:13.723 [2024-10-01 06:05:39.082193] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:13.723 [2024-10-01 06:05:39.082332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81844 ] 00:13:13.723 [2024-10-01 06:05:39.221074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:13.723 [2024-10-01 06:05:39.294706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.723 [2024-10-01 06:05:39.294788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.295 Running I/O for 5 seconds... 
00:13:19.155 21084.00 IOPS, 82.36 MiB/s 22157.00 IOPS, 86.55 MiB/s 22341.00 IOPS, 87.27 MiB/s 22776.50 IOPS, 88.97 MiB/s 22885.00 IOPS, 89.39 MiB/s 00:13:19.155 Latency(us) 00:13:19.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.155 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x0 length 0xa0000 00:13:19.155 nvme0n1 : 5.05 2028.62 7.92 0.00 0.00 62987.52 4436.28 66544.25 00:13:19.155 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0xa0000 length 0xa0000 00:13:19.155 nvme0n1 : 5.06 1794.83 7.01 0.00 0.00 71175.81 7007.31 78643.20 00:13:19.155 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x0 length 0xbd0bd 00:13:19.155 nvme1n1 : 5.07 2131.98 8.33 0.00 0.00 59542.63 4411.08 99211.42 00:13:19.155 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:19.155 nvme1n1 : 5.07 1646.72 6.43 0.00 0.00 77195.37 3780.92 141154.46 00:13:19.155 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x0 length 0x80000 00:13:19.155 nvme2n1 : 5.03 1986.48 7.76 0.00 0.00 64091.86 8771.74 68964.04 00:13:19.155 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x80000 length 0x80000 00:13:19.155 nvme2n1 : 5.03 1731.64 6.76 0.00 0.00 73459.88 7813.91 73400.32 00:13:19.155 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x0 length 0x80000 00:13:19.155 nvme2n2 : 5.06 1973.33 7.71 0.00 0.00 64329.95 9074.22 72593.72 00:13:19.155 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x80000 length 0x80000 00:13:19.155 nvme2n2 : 5.07 1791.47 7.00 0.00 0.00 70861.97 8116.38 68157.44 00:13:19.155 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x0 length 0x80000 00:13:19.155 nvme2n3 : 5.06 1997.17 7.80 0.00 0.00 63461.90 6906.49 65737.65 00:13:19.155 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x80000 length 0x80000 00:13:19.155 nvme2n3 : 5.08 1789.64 6.99 0.00 0.00 70769.23 10889.06 77836.60 00:13:19.155 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x0 length 0x20000 00:13:19.155 nvme3n1 : 5.06 1974.23 7.71 0.00 0.00 64093.45 5948.65 72190.42 00:13:19.155 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.155 Verification LBA range: start 0x20000 length 0x20000 00:13:19.155 nvme3n1 : 5.08 1837.75 7.18 0.00 0.00 68789.42 4562.31 77433.30 00:13:19.155 =================================================================================================================== 00:13:19.155 Total : 22683.85 88.61 0.00 0.00 67203.12 3780.92 141154.46 00:13:19.730 00:13:19.730 real 0m6.035s 00:13:19.730 user 0m9.315s 00:13:19.730 sys 0m1.621s 00:13:19.730 06:05:45 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.730 06:05:45 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:19.730 ************************************ 00:13:19.730 END 
TEST bdev_verify 00:13:19.730 ************************************ 00:13:19.730 06:05:45 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:19.730 06:05:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:19.730 06:05:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.730 06:05:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.730 ************************************ 00:13:19.730 START TEST bdev_verify_big_io 00:13:19.730 ************************************ 00:13:19.730 06:05:45 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:19.730 [2024-10-01 06:05:45.182004] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:19.731 [2024-10-01 06:05:45.182146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81937 ] 00:13:19.731 [2024-10-01 06:05:45.317227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.992 [2024-10-01 06:05:45.388547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.992 [2024-10-01 06:05:45.388633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.254 Running I/O for 5 seconds... 00:13:26.915 1680.00 IOPS, 105.00 MiB/s 1872.00 IOPS, 117.00 MiB/s 2466.33 IOPS, 154.15 MiB/s 00:13:26.915 Latency(us) 00:13:26.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.915 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0x0 length 0xa000 00:13:26.915 nvme0n1 : 5.94 78.12 4.88 0.00 0.00 1613478.72 18753.38 3290915.45 00:13:26.915 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0xa000 length 0xa000 00:13:26.915 nvme0n1 : 5.97 61.63 3.85 0.00 0.00 1945928.23 229073.53 2658543.46 00:13:26.915 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0x0 length 0xbd0b 00:13:26.915 nvme1n1 : 5.95 107.56 6.72 0.00 0.00 1117353.35 13510.50 1051802.39 00:13:26.915 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:26.915 nvme1n1 : 6.01 87.82 5.49 0.00 0.00 1330424.46 5066.44 1819682.66 00:13:26.915 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0x0 length 0x8000 00:13:26.915 nvme2n1 : 5.94 129.37 8.09 0.00 0.00 897379.51 173418.34 1529307.77 00:13:26.915 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0x8000 length 0x8000 00:13:26.915 nvme2n1 : 6.02 82.46 5.15 0.00 0.00 1341995.96 67754.14 1490591.11 00:13:26.915 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0x0 length 0x8000 00:13:26.915 nvme2n2 : 5.94 121.15 7.57 0.00 0.00 
955215.35 15022.87 1155046.79 00:13:26.915 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0x8000 length 0x8000 00:13:26.915 nvme2n2 : 6.04 84.72 5.30 0.00 0.00 1233152.79 241979.08 903388.55 00:13:26.915 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:26.915 Verification LBA range: start 0x0 length 0x8000 00:13:26.915 nvme2n3 : 5.95 95.71 5.98 0.00 0.00 1177617.29 7410.61 2826315.62 00:13:26.916 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:26.916 Verification LBA range: start 0x8000 length 0x8000 00:13:26.916 nvme2n3 : 6.48 121.50 7.59 0.00 0.00 808919.28 586.04 1897115.96 00:13:26.916 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:26.916 Verification LBA range: start 0x0 length 0x2000 00:13:26.916 nvme3n1 : 5.95 172.14 10.76 0.00 0.00 634706.34 7360.20 1109877.37 00:13:26.916 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:26.916 Verification LBA range: start 0x2000 length 0x2000 00:13:26.916 nvme3n1 : 6.56 295.20 18.45 0.00 0.00 323726.03 1676.21 2516582.40 00:13:26.916 =================================================================================================================== 00:13:26.916 Total : 1437.39 89.84 0.00 0.00 923698.29 586.04 3290915.45 00:13:26.916 00:13:26.916 real 0m7.379s 00:13:26.916 user 0m13.618s 00:13:26.916 sys 0m0.473s 00:13:26.916 06:05:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.916 ************************************ 00:13:26.916 END TEST bdev_verify_big_io 00:13:26.916 ************************************ 00:13:26.916 06:05:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.177 06:05:52 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:27.177 06:05:52 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:27.177 06:05:52 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.177 06:05:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.177 ************************************ 00:13:27.177 START TEST bdev_write_zeroes 00:13:27.177 ************************************ 00:13:27.177 06:05:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:27.177 [2024-10-01 06:05:52.611192] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:27.177 [2024-10-01 06:05:52.611314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82047 ] 00:13:27.177 [2024-10-01 06:05:52.747930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.439 [2024-10-01 06:05:52.814362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.699 Running I/O for 1 seconds... 
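The write_zeroes pass is the same harness with a different workload; a minimal sketch of the underlying command, again copied from the xtrace above (no -m mask this time, so the EAL lines show a single core, -c 0x1):

    # Sketch: issue WRITE ZEROES at queue depth 128 for 1 second per bdev
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1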
00:13:28.644 73920.00 IOPS, 288.75 MiB/s 00:13:28.644 Latency(us) 00:13:28.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.644 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:28.644 nvme0n1 : 1.02 12072.27 47.16 0.00 0.00 10593.63 5520.15 22282.24 00:13:28.644 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:28.644 nvme1n1 : 1.02 13438.72 52.50 0.00 0.00 9507.89 3012.14 17241.01 00:13:28.644 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:28.644 nvme2n1 : 1.03 12084.40 47.20 0.00 0.00 10496.85 1865.26 26416.05 00:13:28.644 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:28.644 nvme2n2 : 1.03 11945.88 46.66 0.00 0.00 10611.73 5570.56 23592.96 00:13:28.644 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:28.644 nvme2n3 : 1.03 11932.01 46.61 0.00 0.00 10616.36 5570.56 23290.49 00:13:28.644 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:28.644 nvme3n1 : 1.03 11918.07 46.55 0.00 0.00 10621.22 5494.94 22786.36 00:13:28.644 =================================================================================================================== 00:13:28.644 Total : 73391.36 286.68 0.00 0.00 10390.51 1865.26 26416.05 00:13:28.904 00:13:28.904 real 0m1.762s 00:13:28.904 user 0m1.085s 00:13:28.904 sys 0m0.516s 00:13:28.904 06:05:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.904 ************************************ 00:13:28.904 END TEST bdev_write_zeroes 00:13:28.904 06:05:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:28.904 ************************************ 00:13:28.904 06:05:54 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:28.904 06:05:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:28.904 06:05:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.904 06:05:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.904 ************************************ 00:13:28.904 START TEST bdev_json_nonenclosed 00:13:28.904 ************************************ 00:13:28.904 06:05:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:28.904 [2024-10-01 06:05:54.434052] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:28.904 [2024-10-01 06:05:54.434163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82078 ] 00:13:29.166 [2024-10-01 06:05:54.570051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.166 [2024-10-01 06:05:54.612377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.166 [2024-10-01 06:05:54.612475] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
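The failure above is the point of this test: json_config requires the top-level JSON value to be an object. The exact contents of nonenclosed.json are not shown in this log, but a hypothetical stand-in that trips the same check would be any valid JSON whose root is not an object:

    # Hypothetical fixture (not the repo's nonenclosed.json): root is an array,
    # so json_config_prepare_ctx rejects it with "not enclosed in {}"
    cat > /tmp/not-an-object.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/not-an-object.json -q 128 -o 4096 -w write_zeroes -t 1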
00:13:29.166 [2024-10-01 06:05:54.612491] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:29.166 [2024-10-01 06:05:54.612506] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:29.166 ************************************ 00:13:29.166 END TEST bdev_json_nonenclosed 00:13:29.166 ************************************ 00:13:29.166 00:13:29.166 real 0m0.333s 00:13:29.166 user 0m0.138s 00:13:29.166 sys 0m0.091s 00:13:29.166 06:05:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.166 06:05:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:29.166 06:05:54 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.166 06:05:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:29.166 06:05:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.166 06:05:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.166 ************************************ 00:13:29.166 START TEST bdev_json_nonarray 00:13:29.166 ************************************ 00:13:29.166 06:05:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.428 [2024-10-01 06:05:54.829592] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:29.428 [2024-10-01 06:05:54.829703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82109 ] 00:13:29.428 [2024-10-01 06:05:54.965980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.428 [2024-10-01 06:05:55.007118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.428 [2024-10-01 06:05:55.007387] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
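Same shape of negative test, different invariant: here the root object parses, but the "subsystems" key must map to an array. A hypothetical stand-in for nonarray.json:

    # Hypothetical fixture (not the repo's nonarray.json): "subsystems" is an
    # object rather than an array, so json_config rejects it with
    # "'subsystems' should be an array"
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF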
00:13:29.428 [2024-10-01 06:05:55.007410] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:29.428 [2024-10-01 06:05:55.007422] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:29.689 00:13:29.689 real 0m0.328s 00:13:29.689 user 0m0.133s 00:13:29.690 sys 0m0.091s 00:13:29.690 06:05:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.690 ************************************ 00:13:29.690 END TEST bdev_json_nonarray 00:13:29.690 ************************************ 00:13:29.690 06:05:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:29.690 06:05:55 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:30.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:31.649 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:31.911 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.173 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.173 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.173 00:13:32.173 real 0m50.842s 00:13:32.173 user 1m18.435s 00:13:32.173 sys 0m29.786s 00:13:32.173 06:05:57 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.173 06:05:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.173 ************************************ 00:13:32.173 END TEST blockdev_xnvme 00:13:32.173 ************************************ 00:13:32.173 06:05:57 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:32.173 06:05:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:32.173 06:05:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.173 06:05:57 -- common/autotest_common.sh@10 -- # set +x 00:13:32.435 ************************************ 00:13:32.435 START TEST ublk 00:13:32.435 ************************************ 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:32.435 * Looking for test storage... 
00:13:32.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:32.435 06:05:57 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.435 06:05:57 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.435 06:05:57 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.435 06:05:57 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.435 06:05:57 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.435 06:05:57 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.435 06:05:57 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.435 06:05:57 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.435 06:05:57 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.435 06:05:57 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.435 06:05:57 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.435 06:05:57 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:32.435 06:05:57 ublk -- scripts/common.sh@345 -- # : 1 00:13:32.435 06:05:57 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.435 06:05:57 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:32.435 06:05:57 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:32.435 06:05:57 ublk -- scripts/common.sh@353 -- # local d=1 00:13:32.435 06:05:57 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.435 06:05:57 ublk -- scripts/common.sh@355 -- # echo 1 00:13:32.435 06:05:57 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.435 06:05:57 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:32.435 06:05:57 ublk -- scripts/common.sh@353 -- # local d=2 00:13:32.435 06:05:57 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.435 06:05:57 ublk -- scripts/common.sh@355 -- # echo 2 00:13:32.435 06:05:57 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.435 06:05:57 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.435 06:05:57 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.435 06:05:57 ublk -- scripts/common.sh@368 -- # return 0 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:32.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.435 --rc genhtml_branch_coverage=1 00:13:32.435 --rc genhtml_function_coverage=1 00:13:32.435 --rc genhtml_legend=1 00:13:32.435 --rc geninfo_all_blocks=1 00:13:32.435 --rc geninfo_unexecuted_blocks=1 00:13:32.435 00:13:32.435 ' 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:32.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.435 --rc genhtml_branch_coverage=1 00:13:32.435 --rc genhtml_function_coverage=1 00:13:32.435 --rc genhtml_legend=1 00:13:32.435 --rc geninfo_all_blocks=1 00:13:32.435 --rc geninfo_unexecuted_blocks=1 00:13:32.435 00:13:32.435 ' 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:32.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.435 --rc genhtml_branch_coverage=1 00:13:32.435 --rc 
genhtml_function_coverage=1 00:13:32.435 --rc genhtml_legend=1 00:13:32.435 --rc geninfo_all_blocks=1 00:13:32.435 --rc geninfo_unexecuted_blocks=1 00:13:32.435 00:13:32.435 ' 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:32.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.435 --rc genhtml_branch_coverage=1 00:13:32.435 --rc genhtml_function_coverage=1 00:13:32.435 --rc genhtml_legend=1 00:13:32.435 --rc geninfo_all_blocks=1 00:13:32.435 --rc geninfo_unexecuted_blocks=1 00:13:32.435 00:13:32.435 ' 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:32.435 06:05:57 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:32.435 06:05:57 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:32.435 06:05:57 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:32.435 06:05:57 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:32.435 06:05:57 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:32.435 06:05:57 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:32.435 06:05:57 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:32.435 06:05:57 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:32.435 06:05:57 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.435 06:05:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:32.435 ************************************ 00:13:32.435 START TEST test_save_ublk_config 00:13:32.435 ************************************ 00:13:32.435 06:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:32.435 06:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:32.435 06:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82389 00:13:32.435 06:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:32.435 06:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82389 00:13:32.435 06:05:57 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:32.435 06:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82389 ']' 00:13:32.435 06:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.436 06:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.436 06:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:32.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.436 06:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.436 06:05:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:32.697 [2024-10-01 06:05:58.061080] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:32.697 [2024-10-01 06:05:58.061247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82389 ] 00:13:32.697 [2024-10-01 06:05:58.198317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.697 [2024-10-01 06:05:58.280243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.641 06:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.641 06:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:33.641 06:05:58 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:33.641 06:05:58 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:33.641 06:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.641 06:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:33.641 [2024-10-01 06:05:58.928876] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:33.641 [2024-10-01 06:05:58.929282] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:33.641 malloc0 00:13:33.641 [2024-10-01 06:05:58.969001] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:33.641 [2024-10-01 06:05:58.969101] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:33.641 [2024-10-01 06:05:58.969110] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:33.641 [2024-10-01 06:05:58.969125] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:33.641 [2024-10-01 06:05:58.977998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:33.641 [2024-10-01 06:05:58.978037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:33.641 [2024-10-01 06:05:58.984891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:33.641 [2024-10-01 06:05:58.985030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:33.641 [2024-10-01 06:05:59.001878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:33.641 0 00:13:33.641 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.641 06:05:59 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:33.641 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.641 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:33.901 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.901 06:05:59 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:33.901 "subsystems": [ 00:13:33.901 { 00:13:33.901 "subsystem": "fsdev", 00:13:33.901 
"config": [ 00:13:33.901 { 00:13:33.901 "method": "fsdev_set_opts", 00:13:33.901 "params": { 00:13:33.901 "fsdev_io_pool_size": 65535, 00:13:33.901 "fsdev_io_cache_size": 256 00:13:33.901 } 00:13:33.901 } 00:13:33.901 ] 00:13:33.901 }, 00:13:33.901 { 00:13:33.901 "subsystem": "keyring", 00:13:33.901 "config": [] 00:13:33.901 }, 00:13:33.901 { 00:13:33.901 "subsystem": "iobuf", 00:13:33.901 "config": [ 00:13:33.901 { 00:13:33.901 "method": "iobuf_set_options", 00:13:33.901 "params": { 00:13:33.901 "small_pool_count": 8192, 00:13:33.901 "large_pool_count": 1024, 00:13:33.901 "small_bufsize": 8192, 00:13:33.901 "large_bufsize": 135168 00:13:33.901 } 00:13:33.901 } 00:13:33.901 ] 00:13:33.901 }, 00:13:33.902 { 00:13:33.902 "subsystem": "sock", 00:13:33.902 "config": [ 00:13:33.902 { 00:13:33.902 "method": "sock_set_default_impl", 00:13:33.902 "params": { 00:13:33.902 "impl_name": "posix" 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "sock_impl_set_options", 00:13:33.902 "params": { 00:13:33.902 "impl_name": "ssl", 00:13:33.902 "recv_buf_size": 4096, 00:13:33.902 "send_buf_size": 4096, 00:13:33.902 "enable_recv_pipe": true, 00:13:33.902 "enable_quickack": false, 00:13:33.902 "enable_placement_id": 0, 00:13:33.902 "enable_zerocopy_send_server": true, 00:13:33.902 "enable_zerocopy_send_client": false, 00:13:33.902 "zerocopy_threshold": 0, 00:13:33.902 "tls_version": 0, 00:13:33.902 "enable_ktls": false 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "sock_impl_set_options", 00:13:33.902 "params": { 00:13:33.902 "impl_name": "posix", 00:13:33.902 "recv_buf_size": 2097152, 00:13:33.902 "send_buf_size": 2097152, 00:13:33.902 "enable_recv_pipe": true, 00:13:33.902 "enable_quickack": false, 00:13:33.902 "enable_placement_id": 0, 00:13:33.902 "enable_zerocopy_send_server": true, 00:13:33.902 "enable_zerocopy_send_client": false, 00:13:33.902 "zerocopy_threshold": 0, 00:13:33.902 "tls_version": 0, 00:13:33.902 "enable_ktls": false 00:13:33.902 } 00:13:33.902 } 00:13:33.902 ] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "vmd", 00:13:33.902 "config": [] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "accel", 00:13:33.902 "config": [ 00:13:33.902 { 00:13:33.902 "method": "accel_set_options", 00:13:33.902 "params": { 00:13:33.902 "small_cache_size": 128, 00:13:33.902 "large_cache_size": 16, 00:13:33.902 "task_count": 2048, 00:13:33.902 "sequence_count": 2048, 00:13:33.902 "buf_count": 2048 00:13:33.902 } 00:13:33.902 } 00:13:33.902 ] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "bdev", 00:13:33.902 "config": [ 00:13:33.902 { 00:13:33.902 "method": "bdev_set_options", 00:13:33.902 "params": { 00:13:33.902 "bdev_io_pool_size": 65535, 00:13:33.902 "bdev_io_cache_size": 256, 00:13:33.902 "bdev_auto_examine": true, 00:13:33.902 "iobuf_small_cache_size": 128, 00:13:33.902 "iobuf_large_cache_size": 16 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "bdev_raid_set_options", 00:13:33.902 "params": { 00:13:33.902 "process_window_size_kb": 1024, 00:13:33.902 "process_max_bandwidth_mb_sec": 0 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "bdev_iscsi_set_options", 00:13:33.902 "params": { 00:13:33.902 "timeout_sec": 30 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "bdev_nvme_set_options", 00:13:33.902 "params": { 00:13:33.902 "action_on_timeout": "none", 00:13:33.902 "timeout_us": 0, 00:13:33.902 "timeout_admin_us": 0, 00:13:33.902 "keep_alive_timeout_ms": 10000, 00:13:33.902 
"arbitration_burst": 0, 00:13:33.902 "low_priority_weight": 0, 00:13:33.902 "medium_priority_weight": 0, 00:13:33.902 "high_priority_weight": 0, 00:13:33.902 "nvme_adminq_poll_period_us": 10000, 00:13:33.902 "nvme_ioq_poll_period_us": 0, 00:13:33.902 "io_queue_requests": 0, 00:13:33.902 "delay_cmd_submit": true, 00:13:33.902 "transport_retry_count": 4, 00:13:33.902 "bdev_retry_count": 3, 00:13:33.902 "transport_ack_timeout": 0, 00:13:33.902 "ctrlr_loss_timeout_sec": 0, 00:13:33.902 "reconnect_delay_sec": 0, 00:13:33.902 "fast_io_fail_timeout_sec": 0, 00:13:33.902 "disable_auto_failback": false, 00:13:33.902 "generate_uuids": false, 00:13:33.902 "transport_tos": 0, 00:13:33.902 "nvme_error_stat": false, 00:13:33.902 "rdma_srq_size": 0, 00:13:33.902 "io_path_stat": false, 00:13:33.902 "allow_accel_sequence": false, 00:13:33.902 "rdma_max_cq_size": 0, 00:13:33.902 "rdma_cm_event_timeout_ms": 0, 00:13:33.902 "dhchap_digests": [ 00:13:33.902 "sha256", 00:13:33.902 "sha384", 00:13:33.902 "sha512" 00:13:33.902 ], 00:13:33.902 "dhchap_dhgroups": [ 00:13:33.902 "null", 00:13:33.902 "ffdhe2048", 00:13:33.902 "ffdhe3072", 00:13:33.902 "ffdhe4096", 00:13:33.902 "ffdhe6144", 00:13:33.902 "ffdhe8192" 00:13:33.902 ] 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "bdev_nvme_set_hotplug", 00:13:33.902 "params": { 00:13:33.902 "period_us": 100000, 00:13:33.902 "enable": false 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "bdev_malloc_create", 00:13:33.902 "params": { 00:13:33.902 "name": "malloc0", 00:13:33.902 "num_blocks": 8192, 00:13:33.902 "block_size": 4096, 00:13:33.902 "physical_block_size": 4096, 00:13:33.902 "uuid": "107a0ebc-90ec-45ce-b5d2-b3092814c32d", 00:13:33.902 "optimal_io_boundary": 0, 00:13:33.902 "md_size": 0, 00:13:33.902 "dif_type": 0, 00:13:33.902 "dif_is_head_of_md": false, 00:13:33.902 "dif_pi_format": 0 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "bdev_wait_for_examine" 00:13:33.902 } 00:13:33.902 ] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "scsi", 00:13:33.902 "config": null 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "scheduler", 00:13:33.902 "config": [ 00:13:33.902 { 00:13:33.902 "method": "framework_set_scheduler", 00:13:33.902 "params": { 00:13:33.902 "name": "static" 00:13:33.902 } 00:13:33.902 } 00:13:33.902 ] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "vhost_scsi", 00:13:33.902 "config": [] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "vhost_blk", 00:13:33.902 "config": [] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "ublk", 00:13:33.902 "config": [ 00:13:33.902 { 00:13:33.902 "method": "ublk_create_target", 00:13:33.902 "params": { 00:13:33.902 "cpumask": "1" 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "ublk_start_disk", 00:13:33.902 "params": { 00:13:33.902 "bdev_name": "malloc0", 00:13:33.902 "ublk_id": 0, 00:13:33.902 "num_queues": 1, 00:13:33.902 "queue_depth": 128 00:13:33.902 } 00:13:33.902 } 00:13:33.902 ] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "nbd", 00:13:33.902 "config": [] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "nvmf", 00:13:33.902 "config": [ 00:13:33.902 { 00:13:33.902 "method": "nvmf_set_config", 00:13:33.902 "params": { 00:13:33.902 "discovery_filter": "match_any", 00:13:33.902 "admin_cmd_passthru": { 00:13:33.902 "identify_ctrlr": false 00:13:33.902 }, 00:13:33.902 "dhchap_digests": [ 00:13:33.902 "sha256", 00:13:33.902 "sha384", 00:13:33.902 "sha512" 00:13:33.902 
], 00:13:33.902 "dhchap_dhgroups": [ 00:13:33.902 "null", 00:13:33.902 "ffdhe2048", 00:13:33.902 "ffdhe3072", 00:13:33.902 "ffdhe4096", 00:13:33.902 "ffdhe6144", 00:13:33.902 "ffdhe8192" 00:13:33.902 ] 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "nvmf_set_max_subsystems", 00:13:33.902 "params": { 00:13:33.902 "max_subsystems": 1024 00:13:33.902 } 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "method": "nvmf_set_crdt", 00:13:33.902 "params": { 00:13:33.902 "crdt1": 0, 00:13:33.902 "crdt2": 0, 00:13:33.902 "crdt3": 0 00:13:33.902 } 00:13:33.902 } 00:13:33.902 ] 00:13:33.902 }, 00:13:33.902 { 00:13:33.902 "subsystem": "iscsi", 00:13:33.902 "config": [ 00:13:33.902 { 00:13:33.902 "method": "iscsi_set_options", 00:13:33.902 "params": { 00:13:33.902 "node_base": "iqn.2016-06.io.spdk", 00:13:33.902 "max_sessions": 128, 00:13:33.902 "max_connections_per_session": 2, 00:13:33.902 "max_queue_depth": 64, 00:13:33.902 "default_time2wait": 2, 00:13:33.902 "default_time2retain": 20, 00:13:33.902 "first_burst_length": 8192, 00:13:33.902 "immediate_data": true, 00:13:33.902 "allow_duplicated_isid": false, 00:13:33.902 "error_recovery_level": 0, 00:13:33.902 "nop_timeout": 60, 00:13:33.902 "nop_in_interval": 30, 00:13:33.902 "disable_chap": false, 00:13:33.902 "require_chap": false, 00:13:33.902 "mutual_chap": false, 00:13:33.902 "chap_group": 0, 00:13:33.902 "max_large_datain_per_connection": 64, 00:13:33.902 "max_r2t_per_connection": 4, 00:13:33.902 "pdu_pool_size": 36864, 00:13:33.902 "immediate_data_pool_size": 16384, 00:13:33.902 "data_out_pool_size": 2048 00:13:33.902 } 00:13:33.902 } 00:13:33.902 ] 00:13:33.902 } 00:13:33.902 ] 00:13:33.902 }' 00:13:33.902 06:05:59 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82389 00:13:33.902 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82389 ']' 00:13:33.902 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82389 00:13:33.902 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:33.902 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.903 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82389 00:13:33.903 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.903 killing process with pid 82389 00:13:33.903 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.903 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82389' 00:13:33.903 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82389 00:13:33.903 06:05:59 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82389 00:13:34.163 [2024-10-01 06:05:59.618587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:34.163 [2024-10-01 06:05:59.656891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:34.163 [2024-10-01 06:05:59.657016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:34.163 [2024-10-01 06:05:59.664873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:34.163 [2024-10-01 06:05:59.664931] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:34.163 [2024-10-01 06:05:59.664938] 
ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:34.163 [2024-10-01 06:05:59.664970] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:34.163 [2024-10-01 06:05:59.665106] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82431 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82431 00:13:34.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82431 ']' 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:34.736 06:06:00 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:34.736 "subsystems": [ 00:13:34.736 { 00:13:34.736 "subsystem": "fsdev", 00:13:34.736 "config": [ 00:13:34.736 { 00:13:34.736 "method": "fsdev_set_opts", 00:13:34.736 "params": { 00:13:34.736 "fsdev_io_pool_size": 65535, 00:13:34.736 "fsdev_io_cache_size": 256 00:13:34.736 } 00:13:34.736 } 00:13:34.736 ] 00:13:34.736 }, 00:13:34.736 { 00:13:34.736 "subsystem": "keyring", 00:13:34.736 "config": [] 00:13:34.736 }, 00:13:34.736 { 00:13:34.736 "subsystem": "iobuf", 00:13:34.736 "config": [ 00:13:34.736 { 00:13:34.736 "method": "iobuf_set_options", 00:13:34.736 "params": { 00:13:34.736 "small_pool_count": 8192, 00:13:34.736 "large_pool_count": 1024, 00:13:34.736 "small_bufsize": 8192, 00:13:34.736 "large_bufsize": 135168 00:13:34.736 } 00:13:34.736 } 00:13:34.736 ] 00:13:34.736 }, 00:13:34.736 { 00:13:34.736 "subsystem": "sock", 00:13:34.736 "config": [ 00:13:34.736 { 00:13:34.736 "method": "sock_set_default_impl", 00:13:34.736 "params": { 00:13:34.736 "impl_name": "posix" 00:13:34.736 } 00:13:34.736 }, 00:13:34.736 { 00:13:34.736 "method": "sock_impl_set_options", 00:13:34.736 "params": { 00:13:34.736 "impl_name": "ssl", 00:13:34.736 "recv_buf_size": 4096, 00:13:34.736 "send_buf_size": 4096, 00:13:34.736 "enable_recv_pipe": true, 00:13:34.736 "enable_quickack": false, 00:13:34.736 "enable_placement_id": 0, 00:13:34.736 "enable_zerocopy_send_server": true, 00:13:34.736 "enable_zerocopy_send_client": false, 00:13:34.736 "zerocopy_threshold": 0, 00:13:34.736 "tls_version": 0, 00:13:34.736 "enable_ktls": false 00:13:34.736 } 00:13:34.736 }, 00:13:34.736 { 00:13:34.736 "method": "sock_impl_set_options", 00:13:34.736 "params": { 00:13:34.736 "impl_name": "posix", 00:13:34.736 "recv_buf_size": 2097152, 00:13:34.736 "send_buf_size": 2097152, 00:13:34.736 "enable_recv_pipe": true, 00:13:34.736 "enable_quickack": false, 00:13:34.736 "enable_placement_id": 0, 00:13:34.736 "enable_zerocopy_send_server": true, 00:13:34.736 "enable_zerocopy_send_client": false, 00:13:34.736 "zerocopy_threshold": 0, 00:13:34.736 "tls_version": 0, 00:13:34.737 "enable_ktls": false 00:13:34.737 } 
00:13:34.737 } 00:13:34.737 ] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "vmd", 00:13:34.737 "config": [] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "accel", 00:13:34.737 "config": [ 00:13:34.737 { 00:13:34.737 "method": "accel_set_options", 00:13:34.737 "params": { 00:13:34.737 "small_cache_size": 128, 00:13:34.737 "large_cache_size": 16, 00:13:34.737 "task_count": 2048, 00:13:34.737 "sequence_count": 2048, 00:13:34.737 "buf_count": 2048 00:13:34.737 } 00:13:34.737 } 00:13:34.737 ] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "bdev", 00:13:34.737 "config": [ 00:13:34.737 { 00:13:34.737 "method": "bdev_set_options", 00:13:34.737 "params": { 00:13:34.737 "bdev_io_pool_size": 65535, 00:13:34.737 "bdev_io_cache_size": 256, 00:13:34.737 "bdev_auto_examine": true, 00:13:34.737 "iobuf_small_cache_size": 128, 00:13:34.737 "iobuf_large_cache_size": 16 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "bdev_raid_set_options", 00:13:34.737 "params": { 00:13:34.737 "process_window_size_kb": 1024, 00:13:34.737 "process_max_bandwidth_mb_sec": 0 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "bdev_iscsi_set_options", 00:13:34.737 "params": { 00:13:34.737 "timeout_sec": 30 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "bdev_nvme_set_options", 00:13:34.737 "params": { 00:13:34.737 "action_on_timeout": "none", 00:13:34.737 "timeout_us": 0, 00:13:34.737 "timeout_admin_us": 0, 00:13:34.737 "keep_alive_timeout_ms": 10000, 00:13:34.737 "arbitration_burst": 0, 00:13:34.737 "low_priority_weight": 0, 00:13:34.737 "medium_priority_weight": 0, 00:13:34.737 "high_priority_weight": 0, 00:13:34.737 "nvme_adminq_poll_period_us": 10000, 00:13:34.737 "nvme_ioq_poll_period_us": 0, 00:13:34.737 "io_queue_requests": 0, 00:13:34.737 "delay_cmd_submit": true, 00:13:34.737 "transport_retry_count": 4, 00:13:34.737 "bdev_retry_count": 3, 00:13:34.737 "transport_ack_timeout": 0, 00:13:34.737 "ctrlr_loss_timeout_sec": 0, 00:13:34.737 "reconnect_delay_sec": 0, 00:13:34.737 "fast_io_fail_timeout_sec": 0, 00:13:34.737 "disable_auto_failback": false, 00:13:34.737 "generate_uuids": false, 00:13:34.737 "transport_tos": 0, 00:13:34.737 "nvme_error_stat": false, 00:13:34.737 "rdma_srq_size": 0, 00:13:34.737 "io_path_stat": false, 00:13:34.737 "allow_accel_sequence": false, 00:13:34.737 "rdma_max_cq_size": 0, 00:13:34.737 "rdma_cm_event_timeout_ms": 0, 00:13:34.737 "dhchap_digests": [ 00:13:34.737 "sha256", 00:13:34.737 "sha384", 00:13:34.737 "sha512" 00:13:34.737 ], 00:13:34.737 "dhchap_dhgroups": [ 00:13:34.737 "null", 00:13:34.737 "ffdhe2048", 00:13:34.737 "ffdhe3072", 00:13:34.737 "ffdhe4096", 00:13:34.737 "ffdhe6144", 00:13:34.737 "ffdhe8192" 00:13:34.737 ] 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "bdev_nvme_set_hotplug", 00:13:34.737 "params": { 00:13:34.737 "period_us": 100000, 00:13:34.737 "enable": false 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "bdev_malloc_create", 00:13:34.737 "params": { 00:13:34.737 "name": "malloc0", 00:13:34.737 "num_blocks": 8192, 00:13:34.737 "block_size": 4096, 00:13:34.737 "physical_block_size": 4096, 00:13:34.737 "uuid": "107a0ebc-90ec-45ce-b5d2-b3092814c32d", 00:13:34.737 "optimal_io_boundary": 0, 00:13:34.737 "md_size": 0, 00:13:34.737 "dif_type": 0, 00:13:34.737 "dif_is_head_of_md": false, 00:13:34.737 "dif_pi_format": 0 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "bdev_wait_for_examine" 00:13:34.737 } 00:13:34.737 ] 
00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "scsi", 00:13:34.737 "config": null 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "scheduler", 00:13:34.737 "config": [ 00:13:34.737 { 00:13:34.737 "method": "framework_set_scheduler", 00:13:34.737 "params": { 00:13:34.737 "name": "static" 00:13:34.737 } 00:13:34.737 } 00:13:34.737 ] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "vhost_scsi", 00:13:34.737 "config": [] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "vhost_blk", 00:13:34.737 "config": [] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "ublk", 00:13:34.737 "config": [ 00:13:34.737 { 00:13:34.737 "method": "ublk_create_target", 00:13:34.737 "params": { 00:13:34.737 "cpumask": "1" 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "ublk_start_disk", 00:13:34.737 "params": { 00:13:34.737 "bdev_name": "malloc0", 00:13:34.737 "ublk_id": 0, 00:13:34.737 "num_queues": 1, 00:13:34.737 "queue_depth": 128 00:13:34.737 } 00:13:34.737 } 00:13:34.737 ] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "nbd", 00:13:34.737 "config": [] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "nvmf", 00:13:34.737 "config": [ 00:13:34.737 { 00:13:34.737 "method": "nvmf_set_config", 00:13:34.737 "params": { 00:13:34.737 "discovery_filter": "match_any", 00:13:34.737 "admin_cmd_passthru": { 00:13:34.737 "identify_ctrlr": false 00:13:34.737 }, 00:13:34.737 "dhchap_digests": [ 00:13:34.737 "sha256", 00:13:34.737 "sha384", 00:13:34.737 "sha512" 00:13:34.737 ], 00:13:34.737 "dhchap_dhgroups": [ 00:13:34.737 "null", 00:13:34.737 "ffdhe2048", 00:13:34.737 "ffdhe3072", 00:13:34.737 "ffdhe4096", 00:13:34.737 "ffdhe6144", 00:13:34.737 "ffdhe8192" 00:13:34.737 ] 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "nvmf_set_max_subsystems", 00:13:34.737 "params": { 00:13:34.737 "max_subsystems": 1024 00:13:34.737 } 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "method": "nvmf_set_crdt", 00:13:34.737 "params": { 00:13:34.737 "crdt1": 0, 00:13:34.737 "crdt2": 0, 00:13:34.737 "crdt3": 0 00:13:34.737 } 00:13:34.737 } 00:13:34.737 ] 00:13:34.737 }, 00:13:34.737 { 00:13:34.737 "subsystem": "iscsi", 00:13:34.737 "config": [ 00:13:34.737 { 00:13:34.737 "method": "iscsi_set_options", 00:13:34.737 "params": { 00:13:34.737 "node_base": "iqn.2016-06.io.spdk", 00:13:34.737 "max_sessions": 128, 00:13:34.737 "max_connections_per_session": 2, 00:13:34.737 "max_queue_depth": 64, 00:13:34.737 "default_time2wait": 2, 00:13:34.737 "default_time2retain": 20, 00:13:34.737 "first_burst_length": 8192, 00:13:34.737 "immediate_data": true, 00:13:34.737 "allow_duplicated_isid": false, 00:13:34.737 "error_recovery_level": 0, 00:13:34.737 "nop_timeout": 60, 00:13:34.737 "nop_in_interval": 30, 00:13:34.737 "disable_chap": false, 00:13:34.737 "require_chap": false, 00:13:34.737 "mutual_chap": false, 00:13:34.737 "chap_group": 0, 00:13:34.737 "max_large_datain_per_connection": 64, 00:13:34.737 "max_r2t_per_connection": 4, 00:13:34.737 "pdu_pool_size": 36864, 00:13:34.737 "immediate_data_pool_size": 16384, 00:13:34.737 "data_out_pool_size": 2048 00:13:34.737 } 00:13:34.737 } 00:13:34.737 ] 00:13:34.737 } 00:13:34.737 ] 00:13:34.737 }' 00:13:34.737 [2024-10-01 06:06:00.205072] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:13:34.737 [2024-10-01 06:06:00.205246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82431 ] 00:13:34.737 [2024-10-01 06:06:00.338867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.003 [2024-10-01 06:06:00.428094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.311 [2024-10-01 06:06:00.815867] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:35.311 [2024-10-01 06:06:00.816256] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:35.311 [2024-10-01 06:06:00.824000] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:35.311 [2024-10-01 06:06:00.824084] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:35.311 [2024-10-01 06:06:00.824093] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:35.311 [2024-10-01 06:06:00.824102] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:35.311 [2024-10-01 06:06:00.832989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:35.311 [2024-10-01 06:06:00.833018] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:35.311 [2024-10-01 06:06:00.839883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:35.311 [2024-10-01 06:06:00.839993] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:35.311 [2024-10-01 06:06:00.856872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82431 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82431 ']' 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82431 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82431 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:35.573 killing process with pid 82431 00:13:35.573 
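Stripped of the harness plumbing, the round trip this test just validated is: dump the live configuration, boot a fresh target from the dump, and check that the ublk device comes back. A sketch under the assumption that rpc_cmd in the trace wraps scripts/rpc.py (the test itself inlines the saved JSON via -c /dev/fd/63 rather than a temp file):

    # Sketch: save the live ublk config, then restart spdk_tgt from it
    scripts/rpc.py save_config > /tmp/ublk.json      # produces the JSON dumped above
    build/bin/spdk_tgt -L ublk -c /tmp/ublk.json &   # second target, pid 82431 here
    scripts/rpc.py ublk_get_disks                    # expect ublk_device /dev/ublkb0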
06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82431' 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82431 00:13:35.573 06:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82431 00:13:36.146 [2024-10-01 06:06:01.543237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:36.146 [2024-10-01 06:06:01.583989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:36.146 [2024-10-01 06:06:01.584167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:36.146 [2024-10-01 06:06:01.590892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:36.146 [2024-10-01 06:06:01.590964] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:36.146 [2024-10-01 06:06:01.590975] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:36.146 [2024-10-01 06:06:01.591017] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:36.146 [2024-10-01 06:06:01.591180] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:36.719 06:06:02 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:36.719 00:13:36.719 real 0m4.293s 00:13:36.719 user 0m2.750s 00:13:36.719 sys 0m2.228s 00:13:36.719 06:06:02 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.719 ************************************ 00:13:36.719 END TEST test_save_ublk_config 00:13:36.719 ************************************ 00:13:36.719 06:06:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:36.719 06:06:02 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82487 00:13:36.719 06:06:02 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.719 06:06:02 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82487 00:13:36.719 06:06:02 ublk -- common/autotest_common.sh@831 -- # '[' -z 82487 ']' 00:13:36.719 06:06:02 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.719 06:06:02 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.719 06:06:02 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.719 06:06:02 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:36.719 06:06:02 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.719 06:06:02 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:36.981 [2024-10-01 06:06:02.405601] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:13:36.981 [2024-10-01 06:06:02.405754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82487 ] 00:13:36.981 [2024-10-01 06:06:02.544530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.242 [2024-10-01 06:06:02.618450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.242 [2024-10-01 06:06:02.618556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.816 06:06:03 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.816 06:06:03 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:37.816 06:06:03 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:37.816 06:06:03 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:37.816 06:06:03 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.816 06:06:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:37.816 ************************************ 00:13:37.816 START TEST test_create_ublk 00:13:37.816 ************************************ 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:37.816 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:37.816 [2024-10-01 06:06:03.277877] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:37.816 [2024-10-01 06:06:03.280128] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.816 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:37.816 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.816 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:37.816 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.816 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:37.816 [2024-10-01 06:06:03.403065] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:37.816 [2024-10-01 06:06:03.403608] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:37.816 [2024-10-01 06:06:03.403628] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:37.816 [2024-10-01 06:06:03.403640] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:37.816 [2024-10-01 06:06:03.411341] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:37.816 [2024-10-01 06:06:03.411390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:37.816 
[2024-10-01 06:06:03.418895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:37.816 [2024-10-01 06:06:03.419680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:38.077 [2024-10-01 06:06:03.450892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:38.077 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:38.077 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.077 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:38.077 06:06:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:38.077 { 00:13:38.077 "ublk_device": "/dev/ublkb0", 00:13:38.077 "id": 0, 00:13:38.077 "queue_depth": 512, 00:13:38.077 "num_queues": 4, 00:13:38.077 "bdev_name": "Malloc0" 00:13:38.077 } 00:13:38.077 ]' 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:38.077 06:06:03 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
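Condensed, the single-disk flow the trace has just walked through (target, malloc backing bdev, ublk disk, registration check, then a pattern-verify write job) is equivalent to the sketch below. It uses only the RPCs and fio flags visible in the trace; the scripts/rpc.py path is an assumption.

  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB, 4 KiB blocks -> Malloc0
  scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # 4 queues, depth 512 -> /dev/ublkb0
  scripts/rpc.py ublk_get_disks -n 0                     # confirm the registration
  # write 128 MiB of 0xcc and verify it back, time-capped at 10 s
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0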
00:13:38.077 06:06:03 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:38.338 fio: verification read phase will never start because write phase uses all of runtime 00:13:38.338 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:38.338 fio-3.35 00:13:38.338 Starting 1 process 00:13:48.341 00:13:48.341 fio_test: (groupid=0, jobs=1): err= 0: pid=82536: Tue Oct 1 06:06:13 2024 00:13:48.341 write: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(657MiB/10001msec); 0 zone resets 00:13:48.341 clat (usec): min=37, max=4146, avg=58.55, stdev=91.83 00:13:48.341 lat (usec): min=38, max=4147, avg=59.04, stdev=91.87 00:13:48.341 clat percentiles (usec): 00:13:48.341 | 1.00th=[ 43], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 48], 00:13:48.341 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 53], 00:13:48.341 | 70.00th=[ 55], 80.00th=[ 59], 90.00th=[ 70], 95.00th=[ 81], 00:13:48.341 | 99.00th=[ 104], 99.50th=[ 192], 99.90th=[ 1827], 99.95th=[ 2573], 00:13:48.341 | 99.99th=[ 3556] 00:13:48.341 bw ( KiB/s): min=43408, max=75304, per=99.80%, avg=67183.16, stdev=8738.31, samples=19 00:13:48.341 iops : min=10852, max=18826, avg=16795.79, stdev=2184.58, samples=19 00:13:48.341 lat (usec) : 50=41.58%, 100=57.24%, 250=0.89%, 500=0.11%, 750=0.01% 00:13:48.341 lat (usec) : 1000=0.01% 00:13:48.341 lat (msec) : 2=0.05%, 4=0.09%, 10=0.01% 00:13:48.341 cpu : usr=2.92%, sys=17.31%, ctx=168342, majf=0, minf=796 00:13:48.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.341 issued rwts: total=0,168313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.341 00:13:48.341 Run status group 0 (all jobs): 00:13:48.341 WRITE: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=657MiB (689MB), run=10001-10001msec 00:13:48.341 00:13:48.341 Disk stats (read/write): 00:13:48.341 ublkb0: ios=0/166452, merge=0/0, ticks=0/7570, in_queue=7570, util=99.09% 00:13:48.341 06:06:13 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.341 [2024-10-01 06:06:13.894030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:48.341 [2024-10-01 06:06:13.931312] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:48.341 [2024-10-01 06:06:13.932174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:48.341 [2024-10-01 06:06:13.936868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:48.341 [2024-10-01 06:06:13.937085] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:48.341 [2024-10-01 06:06:13.937099] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.341 06:06:13 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.341 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 [2024-10-01 06:06:13.960921] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:48.602 request: 00:13:48.602 { 00:13:48.602 "ublk_id": 0, 00:13:48.602 "method": "ublk_stop_disk", 00:13:48.602 "req_id": 1 00:13:48.602 } 00:13:48.602 Got JSON-RPC error response 00:13:48.602 response: 00:13:48.602 { 00:13:48.602 "code": -19, 00:13:48.602 "message": "No such device" 00:13:48.602 } 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.602 06:06:13 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 [2024-10-01 06:06:13.976928] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:48.602 [2024-10-01 06:06:13.978328] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:48.602 [2024-10-01 06:06:13.978365] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.602 06:06:13 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.602 06:06:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.602 06:06:14 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:48.602 06:06:14 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.602 06:06:14 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:48.602 06:06:14 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:48.602 06:06:14 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:48.602 06:06:14 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.602 06:06:14 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:48.602 06:06:14 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:48.602 ************************************ 00:13:48.602 END TEST test_create_ublk 00:13:48.602 ************************************ 00:13:48.602 06:06:14 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:48.602 00:13:48.602 real 0m10.866s 00:13:48.602 user 0m0.609s 00:13:48.602 sys 0m1.813s 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.602 06:06:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 06:06:14 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:48.602 06:06:14 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:48.602 06:06:14 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.602 06:06:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 ************************************ 00:13:48.602 START TEST test_create_multi_ublk 00:13:48.602 ************************************ 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 [2024-10-01 06:06:14.196864] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:48.602 [2024-10-01 06:06:14.197811] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.602 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.864 [2024-10-01 06:06:14.280974] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:13:48.864 [2024-10-01 06:06:14.281273] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:48.864 [2024-10-01 06:06:14.281304] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:48.864 [2024-10-01 06:06:14.281309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:48.864 [2024-10-01 06:06:14.300867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:48.864 [2024-10-01 06:06:14.300884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:48.864 [2024-10-01 06:06:14.312876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:48.864 [2024-10-01 06:06:14.313375] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:48.864 [2024-10-01 06:06:14.325006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.864 [2024-10-01 06:06:14.406960] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:48.864 [2024-10-01 06:06:14.407250] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:48.864 [2024-10-01 06:06:14.407261] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:48.864 [2024-10-01 06:06:14.407268] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:48.864 [2024-10-01 06:06:14.418877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:48.864 [2024-10-01 06:06:14.418897] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:48.864 [2024-10-01 06:06:14.430866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:48.864 [2024-10-01 06:06:14.431369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:48.864 [2024-10-01 06:06:14.455870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:48.864 
06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.864 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.126 [2024-10-01 06:06:14.538955] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:49.126 [2024-10-01 06:06:14.539239] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:49.126 [2024-10-01 06:06:14.539252] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:49.126 [2024-10-01 06:06:14.539257] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.126 [2024-10-01 06:06:14.550882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.126 [2024-10-01 06:06:14.550899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.126 [2024-10-01 06:06:14.562867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.126 [2024-10-01 06:06:14.563371] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:49.126 [2024-10-01 06:06:14.569904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.126 [2024-10-01 06:06:14.650952] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:49.126 [2024-10-01 06:06:14.651245] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:49.126 [2024-10-01 06:06:14.651256] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:49.126 [2024-10-01 06:06:14.651262] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.126 
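The loop over seq 0 $MAX_DEV_ID above repeats the same bdev-plus-disk pair once per id (the ADD_DEV completion for ublk3 follows below). Collapsed, the four-disk setup amounts to this sketch, again assuming the scripts/rpc.py path:

  # one 128 MiB malloc bdev per ublk id, each with 4 queues of depth 512
  for i in 0 1 2 3; do
      scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
      scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done
  scripts/rpc.py ublk_get_disks    # should report /dev/ublkb0 through /dev/ublkb3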
[2024-10-01 06:06:14.662884] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.126 [2024-10-01 06:06:14.662907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.126 [2024-10-01 06:06:14.674866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.126 [2024-10-01 06:06:14.675358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:49.126 [2024-10-01 06:06:14.710872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.126 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:49.387 { 00:13:49.387 "ublk_device": "/dev/ublkb0", 00:13:49.387 "id": 0, 00:13:49.387 "queue_depth": 512, 00:13:49.387 "num_queues": 4, 00:13:49.387 "bdev_name": "Malloc0" 00:13:49.387 }, 00:13:49.387 { 00:13:49.387 "ublk_device": "/dev/ublkb1", 00:13:49.387 "id": 1, 00:13:49.387 "queue_depth": 512, 00:13:49.387 "num_queues": 4, 00:13:49.387 "bdev_name": "Malloc1" 00:13:49.387 }, 00:13:49.387 { 00:13:49.387 "ublk_device": "/dev/ublkb2", 00:13:49.387 "id": 2, 00:13:49.387 "queue_depth": 512, 00:13:49.387 "num_queues": 4, 00:13:49.387 "bdev_name": "Malloc2" 00:13:49.387 }, 00:13:49.387 { 00:13:49.387 "ublk_device": "/dev/ublkb3", 00:13:49.387 "id": 3, 00:13:49.387 "queue_depth": 512, 00:13:49.387 "num_queues": 4, 00:13:49.387 "bdev_name": "Malloc3" 00:13:49.387 } 00:13:49.387 ]' 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:49.387 06:06:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:49.387 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:49.648 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.909 [2024-10-01 06:06:15.369953] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:49.909 [2024-10-01 06:06:15.401902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:49.909 [2024-10-01 06:06:15.402556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:49.909 [2024-10-01 06:06:15.409865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:49.909 [2024-10-01 06:06:15.410097] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:49.909 [2024-10-01 06:06:15.410108] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.909 [2024-10-01 06:06:15.423942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:49.909 [2024-10-01 06:06:15.459309] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:49.909 [2024-10-01 06:06:15.460251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:49.909 [2024-10-01 06:06:15.465867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:49.909 [2024-10-01 06:06:15.466096] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:49.909 [2024-10-01 06:06:15.466108] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.909 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.909 [2024-10-01 06:06:15.480949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:49.909 [2024-10-01 06:06:15.516897] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:49.909 [2024-10-01 06:06:15.517503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:49.909 [2024-10-01 06:06:15.524871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:49.909 [2024-10-01 06:06:15.525094] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:49.909 [2024-10-01 06:06:15.525105] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:50.170 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.170 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.170 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:50.170 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.170 06:06:15 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.170 [2024-10-01 06:06:15.540946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.170 [2024-10-01 06:06:15.577889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.170 [2024-10-01 06:06:15.578448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.170 [2024-10-01 06:06:15.581063] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.170 [2024-10-01 06:06:15.581279] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:50.170 [2024-10-01 06:06:15.581291] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:50.170 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.170 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:50.170 [2024-10-01 06:06:15.767926] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:50.170 [2024-10-01 06:06:15.768744] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:50.171 [2024-10-01 06:06:15.768776] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.432 06:06:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.432 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.432 06:06:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:50.432 06:06:16 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:50.432 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.432 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.432 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.432 06:06:16 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:50.432 06:06:16 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:50.693 ************************************ 00:13:50.693 END TEST test_create_multi_ublk 00:13:50.693 ************************************ 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:50.693 00:13:50.693 real 0m1.932s 00:13:50.693 user 0m0.796s 00:13:50.693 sys 0m0.146s 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.693 06:06:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.693 06:06:16 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:50.693 06:06:16 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:50.693 06:06:16 ublk -- ublk/ublk.sh@130 -- # killprocess 82487 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@950 -- # '[' -z 82487 ']' 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@954 -- # kill -0 82487 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@955 -- # uname 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82487 00:13:50.693 killing process with pid 82487 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82487' 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@969 -- # kill 82487 00:13:50.693 06:06:16 ublk -- common/autotest_common.sh@974 -- # wait 82487 00:13:50.955 [2024-10-01 06:06:16.356670] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:50.955 [2024-10-01 06:06:16.356740] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:51.217 00:13:51.217 real 0m18.830s 00:13:51.217 user 0m27.900s 00:13:51.217 sys 0m9.080s 00:13:51.217 06:06:16 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.217 ************************************ 00:13:51.217 END TEST ublk 00:13:51.217 ************************************ 00:13:51.217 06:06:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.217 06:06:16 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:51.217 
06:06:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:51.217 06:06:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.217 06:06:16 -- common/autotest_common.sh@10 -- # set +x 00:13:51.217 ************************************ 00:13:51.217 START TEST ublk_recovery 00:13:51.217 ************************************ 00:13:51.217 06:06:16 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:51.217 * Looking for test storage... 00:13:51.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:51.217 06:06:16 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.218 06:06:16 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:51.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.218 --rc genhtml_branch_coverage=1 00:13:51.218 --rc genhtml_function_coverage=1 00:13:51.218 --rc genhtml_legend=1 00:13:51.218 --rc geninfo_all_blocks=1 00:13:51.218 --rc geninfo_unexecuted_blocks=1 00:13:51.218 00:13:51.218 ' 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:51.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.218 --rc genhtml_branch_coverage=1 00:13:51.218 --rc genhtml_function_coverage=1 00:13:51.218 --rc genhtml_legend=1 00:13:51.218 --rc geninfo_all_blocks=1 00:13:51.218 --rc geninfo_unexecuted_blocks=1 00:13:51.218 00:13:51.218 ' 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:51.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.218 --rc genhtml_branch_coverage=1 00:13:51.218 --rc genhtml_function_coverage=1 00:13:51.218 --rc genhtml_legend=1 00:13:51.218 --rc geninfo_all_blocks=1 00:13:51.218 --rc geninfo_unexecuted_blocks=1 00:13:51.218 00:13:51.218 ' 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:51.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.218 --rc genhtml_branch_coverage=1 00:13:51.218 --rc genhtml_function_coverage=1 00:13:51.218 --rc genhtml_legend=1 00:13:51.218 --rc geninfo_all_blocks=1 00:13:51.218 --rc geninfo_unexecuted_blocks=1 00:13:51.218 00:13:51.218 ' 00:13:51.218 06:06:16 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:51.218 06:06:16 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:51.218 06:06:16 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:51.218 06:06:16 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:51.218 06:06:16 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:51.218 06:06:16 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:51.218 06:06:16 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:51.218 06:06:16 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:51.218 06:06:16 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:51.218 06:06:16 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:51.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.218 06:06:16 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82859 00:13:51.218 06:06:16 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:51.218 06:06:16 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82859 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82859 ']' 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.218 06:06:16 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:51.218 06:06:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:51.480 [2024-10-01 06:06:16.887410] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:13:51.480 [2024-10-01 06:06:16.887691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82859 ] 00:13:51.480 [2024-10-01 06:06:17.019774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.480 [2024-10-01 06:06:17.053056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.480 [2024-10-01 06:06:17.053127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:52.423 06:06:17 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.423 [2024-10-01 06:06:17.730870] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:52.423 [2024-10-01 06:06:17.732276] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.423 06:06:17 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.423 malloc0 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.423 06:06:17 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.423 [2024-10-01 06:06:17.779002] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:13:52.423 [2024-10-01 06:06:17.779100] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:52.423 [2024-10-01 06:06:17.779115] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:52.423 [2024-10-01 06:06:17.779124] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:52.423 [2024-10-01 06:06:17.787971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:52.423 [2024-10-01 06:06:17.788005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:52.423 [2024-10-01 06:06:17.794870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:52.423 [2024-10-01 06:06:17.795021] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:52.423 [2024-10-01 06:06:17.806869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:52.423 1 00:13:52.423 06:06:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.423 06:06:17 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:53.367 06:06:18 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82888 00:13:53.367 06:06:18 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:53.367 06:06:18 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:53.367 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:53.367 fio-3.35 00:13:53.367 Starting 1 process 00:13:58.652 06:06:23 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82859 00:13:58.652 06:06:23 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:03.939 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82859 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:03.939 06:06:28 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82997 00:14:03.939 06:06:28 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:03.939 06:06:28 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.939 06:06:28 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82997 00:14:03.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.939 06:06:28 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82997 ']' 00:14:03.939 06:06:28 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.939 06:06:28 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.939 06:06:28 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.939 06:06:28 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.939 06:06:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.939 [2024-10-01 06:06:28.912057] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
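Stripped of harness plumbing, the drill being traced here is: run I/O against the ublk device, SIGKILL the target mid-run so the kernel-side device is orphaned, start a fresh target, recreate the backing bdev, and ask the new target to re-adopt the device. A sketch of that sequence, using only commands that appear in this log (capturing $! is an assumption; the harness tracks the pid itself and waits for the new RPC socket before issuing commands):

  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  sleep 5
  kill -9 "$spdk_pid"                          # simulate a target crash mid-I/O
  build/bin/spdk_tgt -m 0x3 -L ublk &          # bring up a replacement target
  spdk_pid=$!
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1   # re-attach /dev/ublkb1 to the new target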
00:14:03.939 [2024-10-01 06:06:28.912211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82997 ] 00:14:03.939 [2024-10-01 06:06:29.048159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:03.939 [2024-10-01 06:06:29.118817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.939 [2024-10-01 06:06:29.118827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.201 06:06:29 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.201 06:06:29 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:04.201 06:06:29 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:04.201 06:06:29 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.201 06:06:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.201 [2024-10-01 06:06:29.781879] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:04.201 [2024-10-01 06:06:29.784074] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:04.201 06:06:29 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.201 06:06:29 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:04.201 06:06:29 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.201 06:06:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.462 malloc0 00:14:04.462 06:06:29 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.462 06:06:29 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:04.462 06:06:29 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.462 06:06:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.462 [2024-10-01 06:06:29.847064] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:04.462 [2024-10-01 06:06:29.847123] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:04.462 [2024-10-01 06:06:29.847133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:04.462 [2024-10-01 06:06:29.855926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:04.462 [2024-10-01 06:06:29.855954] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:04.462 1 00:14:04.462 06:06:29 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.462 06:06:29 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82888 00:14:05.407 [2024-10-01 06:06:30.856027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:05.407 [2024-10-01 06:06:30.859909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:05.407 [2024-10-01 06:06:30.859937] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:06.350 [2024-10-01 06:06:31.859977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:06.350 [2024-10-01 06:06:31.861871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:06.350 [2024-10-01 06:06:31.861885] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:07.285 [2024-10-01 06:06:32.861939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:07.285 [2024-10-01 06:06:32.869875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:07.285 [2024-10-01 06:06:32.869903] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:07.285 [2024-10-01 06:06:32.869911] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:07.285 [2024-10-01 06:06:32.869989] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:29.212 [2024-10-01 06:06:54.267882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:29.212 [2024-10-01 06:06:54.274510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:29.212 [2024-10-01 06:06:54.282161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:29.212 [2024-10-01 06:06:54.282183] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:55.748 00:14:55.748 fio_test: (groupid=0, jobs=1): err= 0: pid=82895: Tue Oct 1 06:07:19 2024 00:14:55.748 read: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(3136MiB/60002msec) 00:14:55.748 slat (nsec): min=1082, max=268290, avg=5578.95, stdev=1495.70 00:14:55.748 clat (usec): min=968, max=30470k, avg=4923.91, stdev=282440.29 00:14:55.748 lat (usec): min=972, max=30470k, avg=4929.49, stdev=282440.30 00:14:55.748 clat percentiles (usec): 00:14:55.748 | 1.00th=[ 1893], 5.00th=[ 2040], 10.00th=[ 2073], 20.00th=[ 2114], 00:14:55.748 | 30.00th=[ 2114], 40.00th=[ 2147], 50.00th=[ 2147], 60.00th=[ 2180], 00:14:55.748 | 70.00th=[ 2212], 80.00th=[ 2212], 90.00th=[ 2638], 95.00th=[ 3228], 00:14:55.748 | 99.00th=[ 5342], 99.50th=[ 5735], 99.90th=[ 7308], 99.95th=[ 8455], 00:14:55.748 | 99.99th=[12649] 00:14:55.748 bw ( KiB/s): min=30376, max=114336, per=100.00%, avg=107235.39, stdev=14744.24, samples=59 00:14:55.748 iops : min= 7594, max=28584, avg=26808.86, stdev=3686.08, samples=59 00:14:55.748 write: IOPS=13.4k, BW=52.2MiB/s (54.7MB/s)(3131MiB/60002msec); 0 zone resets 00:14:55.748 slat (nsec): min=1108, max=691095, avg=5814.09, stdev=1744.90 00:14:55.748 clat (usec): min=518, max=30470k, avg=4638.34, stdev=261392.16 00:14:55.748 lat (usec): min=530, max=30470k, avg=4644.16, stdev=261392.16 00:14:55.748 clat percentiles (usec): 00:14:55.748 | 1.00th=[ 1942], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2212], 00:14:55.748 | 30.00th=[ 2245], 40.00th=[ 2245], 50.00th=[ 2278], 60.00th=[ 2278], 00:14:55.748 | 70.00th=[ 2311], 80.00th=[ 2343], 90.00th=[ 2704], 95.00th=[ 3163], 00:14:55.748 | 99.00th=[ 5407], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8455], 00:14:55.748 | 99.99th=[12518] 00:14:55.748 bw ( KiB/s): min=31080, max=113920, per=100.00%, avg=107072.58, stdev=14495.40, samples=59 00:14:55.748 iops : min= 7770, max=28480, avg=26768.08, stdev=3623.85, samples=59 00:14:55.748 lat (usec) : 750=0.01%, 1000=0.01% 00:14:55.748 lat (msec) : 2=1.97%, 4=95.10%, 10=2.92%, 20=0.01%, >=2000=0.01% 00:14:55.748 cpu : usr=2.85%, sys=15.56%, ctx=52822, majf=0, minf=13 00:14:55.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:55.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.748 
issued rwts: total=802788,801455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.748 00:14:55.748 Run status group 0 (all jobs): 00:14:55.748 READ: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=3136MiB (3288MB), run=60002-60002msec 00:14:55.748 WRITE: bw=52.2MiB/s (54.7MB/s), 52.2MiB/s-52.2MiB/s (54.7MB/s-54.7MB/s), io=3131MiB (3283MB), run=60002-60002msec 00:14:55.748 00:14:55.748 Disk stats (read/write): 00:14:55.748 ublkb1: ios=799955/798706, merge=0/0, ticks=3902297/3597946, in_queue=7500244, util=99.95% 00:14:55.748 06:07:19 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:55.748 06:07:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.748 06:07:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:55.748 [2024-10-01 06:07:19.064599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:55.748 [2024-10-01 06:07:19.098930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:55.748 [2024-10-01 06:07:19.099123] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:55.748 [2024-10-01 06:07:19.106897] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:55.748 [2024-10-01 06:07:19.107030] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:55.748 [2024-10-01 06:07:19.107042] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:55.748 06:07:19 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.748 06:07:19 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:55.748 06:07:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:55.749 [2024-10-01 06:07:19.122983] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:55.749 [2024-10-01 06:07:19.124358] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:55.749 [2024-10-01 06:07:19.124392] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.749 06:07:19 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:55.749 06:07:19 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:55.749 06:07:19 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82997 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 82997 ']' 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 82997 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82997 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.749 killing process with pid 82997 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82997' 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@969 -- # kill 82997 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@974 -- # wait 82997 
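Note on the numbers above: the clat max of ~30470k usec (about 30 s) on both the read and write sides is the outage itself; I/O queued against ublkb1 stalled across the target restart and the recovery window (the UBLK_CMD_START_USER_RECOVERY submitted at 06:06:32 only completed at 06:06:54), then drained once recovery ended. The RPC sequence the test drives looks roughly like the sketch below, assuming the standard scripts/rpc.py client; ublk_stop_disk and ublk_destroy_target are copied from the log, while the ublk_recover_disk call and its bdev argument are assumptions, since the re-attach happened before this excerpt.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Re-attach existing ublk device 1 after the target restart; this is what
    # drives the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY
    # pair logged above. The bdev name here is hypothetical.
    "$rpc_py" ublk_recover_disk malloc0 1

    # Once fio finishes verifying, stop the device and destroy the ublk target.
    "$rpc_py" ublk_stop_disk 1
    "$rpc_py" ublk_destroy_target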
00:14:55.749 [2024-10-01 06:07:19.390259] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:55.749 [2024-10-01 06:07:19.390317] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:55.749 00:14:55.749 real 1m3.116s 00:14:55.749 user 1m44.142s 00:14:55.749 sys 0m22.573s 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.749 ************************************ 00:14:55.749 END TEST ublk_recovery 00:14:55.749 ************************************ 00:14:55.749 06:07:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:55.749 06:07:19 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:55.749 06:07:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:55.749 06:07:19 -- common/autotest_common.sh@10 -- # set +x 00:14:55.749 06:07:19 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:14:55.749 06:07:19 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:55.749 06:07:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:55.749 06:07:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.749 06:07:19 -- common/autotest_common.sh@10 -- # set +x 00:14:55.749 ************************************ 00:14:55.749 START TEST ftl 00:14:55.749 ************************************ 00:14:55.749 06:07:19 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:55.749 * Looking for test storage... 
00:14:55.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:55.749 06:07:19 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:55.749 06:07:19 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:55.749 06:07:19 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:14:55.749 06:07:19 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:55.749 06:07:19 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.749 06:07:19 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.749 06:07:19 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.749 06:07:19 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.749 06:07:19 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.749 06:07:19 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.749 06:07:19 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.749 06:07:19 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.749 06:07:19 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.749 06:07:19 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.749 06:07:19 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.749 06:07:19 ftl -- scripts/common.sh@344 -- # case "$op" in 00:14:55.749 06:07:19 ftl -- scripts/common.sh@345 -- # : 1 00:14:55.749 06:07:19 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.749 06:07:19 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:55.749 06:07:19 ftl -- scripts/common.sh@365 -- # decimal 1 00:14:55.749 06:07:19 ftl -- scripts/common.sh@353 -- # local d=1 00:14:55.749 06:07:19 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.749 06:07:19 ftl -- scripts/common.sh@355 -- # echo 1 00:14:55.749 06:07:19 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.749 06:07:19 ftl -- scripts/common.sh@366 -- # decimal 2 00:14:55.749 06:07:19 ftl -- scripts/common.sh@353 -- # local d=2 00:14:55.749 06:07:19 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.749 06:07:20 ftl -- scripts/common.sh@355 -- # echo 2 00:14:55.749 06:07:20 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.749 06:07:20 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.749 06:07:20 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.749 06:07:20 ftl -- scripts/common.sh@368 -- # return 0 00:14:55.749 06:07:20 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.749 06:07:20 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:55.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.749 --rc genhtml_branch_coverage=1 00:14:55.749 --rc genhtml_function_coverage=1 00:14:55.749 --rc genhtml_legend=1 00:14:55.749 --rc geninfo_all_blocks=1 00:14:55.749 --rc geninfo_unexecuted_blocks=1 00:14:55.749 00:14:55.749 ' 00:14:55.749 06:07:20 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:55.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.749 --rc genhtml_branch_coverage=1 00:14:55.749 --rc genhtml_function_coverage=1 00:14:55.749 --rc genhtml_legend=1 00:14:55.749 --rc geninfo_all_blocks=1 00:14:55.749 --rc geninfo_unexecuted_blocks=1 00:14:55.749 00:14:55.749 ' 00:14:55.749 06:07:20 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:55.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.749 --rc genhtml_branch_coverage=1 00:14:55.749 --rc genhtml_function_coverage=1 00:14:55.749 --rc 
genhtml_legend=1 00:14:55.749 --rc geninfo_all_blocks=1 00:14:55.749 --rc geninfo_unexecuted_blocks=1 00:14:55.749 00:14:55.749 ' 00:14:55.749 06:07:20 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:55.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.749 --rc genhtml_branch_coverage=1 00:14:55.749 --rc genhtml_function_coverage=1 00:14:55.749 --rc genhtml_legend=1 00:14:55.749 --rc geninfo_all_blocks=1 00:14:55.749 --rc geninfo_unexecuted_blocks=1 00:14:55.749 00:14:55.749 ' 00:14:55.749 06:07:20 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:55.749 06:07:20 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:55.749 06:07:20 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:55.749 06:07:20 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:55.749 06:07:20 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:55.749 06:07:20 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:55.749 06:07:20 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.749 06:07:20 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:55.749 06:07:20 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:55.749 06:07:20 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:55.749 06:07:20 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:55.749 06:07:20 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:55.749 06:07:20 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:55.749 06:07:20 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:55.749 06:07:20 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:55.749 06:07:20 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:55.749 06:07:20 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:55.749 06:07:20 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:55.749 06:07:20 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:55.749 06:07:20 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:55.749 06:07:20 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:55.749 06:07:20 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:55.749 06:07:20 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:55.749 06:07:20 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:55.749 06:07:20 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:55.749 06:07:20 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:55.749 06:07:20 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:55.749 06:07:20 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:55.749 06:07:20 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:55.749 06:07:20 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.749 06:07:20 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:55.749 06:07:20 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:14:55.749 06:07:20 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:55.749 06:07:20 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:55.749 06:07:20 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:55.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:55.749 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:55.750 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:55.750 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:55.750 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:55.750 06:07:20 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83791 00:14:55.750 06:07:20 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83791 00:14:55.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.750 06:07:20 ftl -- common/autotest_common.sh@831 -- # '[' -z 83791 ']' 00:14:55.750 06:07:20 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.750 06:07:20 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.750 06:07:20 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.750 06:07:20 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.750 06:07:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:55.750 06:07:20 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:55.750 [2024-10-01 06:07:20.537318] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:14:55.750 [2024-10-01 06:07:20.537452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83791 ] 00:14:55.750 [2024-10-01 06:07:20.670021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.750 [2024-10-01 06:07:20.713550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.008 06:07:21 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.008 06:07:21 ftl -- common/autotest_common.sh@864 -- # return 0 00:14:56.008 06:07:21 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:56.008 06:07:21 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:56.575 06:07:21 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:56.575 06:07:21 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:56.834 06:07:22 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:56.834 06:07:22 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:56.834 06:07:22 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:57.092 06:07:22 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:14:57.092 06:07:22 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:57.092 06:07:22 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:14:57.092 06:07:22 ftl -- ftl/ftl.sh@50 -- # break 00:14:57.092 06:07:22 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:14:57.092 06:07:22 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:14:57.092 06:07:22 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:57.092 06:07:22 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:57.350 06:07:22 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:14:57.350 06:07:22 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:57.350 06:07:22 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:14:57.350 06:07:22 ftl -- ftl/ftl.sh@63 -- # break 00:14:57.350 06:07:22 ftl -- ftl/ftl.sh@66 -- # killprocess 83791 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@950 -- # '[' -z 83791 ']' 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@954 -- # kill -0 83791 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@955 -- # uname 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83791 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:57.350 killing process with pid 83791 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83791' 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@969 -- # kill 83791 00:14:57.350 06:07:22 ftl -- common/autotest_common.sh@974 -- # wait 83791 00:14:57.608 06:07:23 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:14:57.608 06:07:23 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:57.608 06:07:23 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:57.608 06:07:23 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.608 06:07:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:57.608 ************************************ 00:14:57.608 START TEST ftl_fio_basic 00:14:57.608 ************************************ 00:14:57.608 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:57.608 * Looking for test storage... 
00:14:57.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:57.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.867 --rc genhtml_branch_coverage=1 00:14:57.867 --rc genhtml_function_coverage=1 00:14:57.867 --rc genhtml_legend=1 00:14:57.867 --rc geninfo_all_blocks=1 00:14:57.867 --rc geninfo_unexecuted_blocks=1 00:14:57.867 00:14:57.867 ' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:57.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.867 --rc 
genhtml_branch_coverage=1 00:14:57.867 --rc genhtml_function_coverage=1 00:14:57.867 --rc genhtml_legend=1 00:14:57.867 --rc geninfo_all_blocks=1 00:14:57.867 --rc geninfo_unexecuted_blocks=1 00:14:57.867 00:14:57.867 ' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:57.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.867 --rc genhtml_branch_coverage=1 00:14:57.867 --rc genhtml_function_coverage=1 00:14:57.867 --rc genhtml_legend=1 00:14:57.867 --rc geninfo_all_blocks=1 00:14:57.867 --rc geninfo_unexecuted_blocks=1 00:14:57.867 00:14:57.867 ' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:57.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.867 --rc genhtml_branch_coverage=1 00:14:57.867 --rc genhtml_function_coverage=1 00:14:57.867 --rc genhtml_legend=1 00:14:57.867 --rc geninfo_all_blocks=1 00:14:57.867 --rc geninfo_unexecuted_blocks=1 00:14:57.867 00:14:57.867 ' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:57.867 
06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:57.867 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83912 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83912 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 83912 ']' 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
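The waitforlisten call above blocks until pid 83912 answers on /var/tmp/spdk.sock. A minimal stand-in for that helper, assuming the paths visible in this log (the real implementation in autotest_common.sh does more bookkeeping, such as retry limits and custom rpc_addr handling):

    # Launch the target, then poll the RPC socket until it answers; bail out
    # early if the process dies before it ever listens.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done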
00:14:57.868 06:07:23 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.868 06:07:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:57.868 [2024-10-01 06:07:23.395155] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:14:57.868 [2024-10-01 06:07:23.395457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83912 ] 00:14:58.126 [2024-10-01 06:07:23.531086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.126 [2024-10-01 06:07:23.575941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.126 [2024-10-01 06:07:23.576060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.126 [2024-10-01 06:07:23.576030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.690 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.690 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:14:58.690 06:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:58.690 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:58.690 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:58.690 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:58.690 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:58.690 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:58.947 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:58.947 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:58.947 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:58.947 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:14:58.947 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:58.947 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:58.947 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:58.947 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:59.205 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:59.205 { 00:14:59.205 "name": "nvme0n1", 00:14:59.205 "aliases": [ 00:14:59.205 "a1070d07-323d-48be-adac-ea38bc318fa2" 00:14:59.205 ], 00:14:59.205 "product_name": "NVMe disk", 00:14:59.205 "block_size": 4096, 00:14:59.205 "num_blocks": 1310720, 00:14:59.205 "uuid": "a1070d07-323d-48be-adac-ea38bc318fa2", 00:14:59.205 "numa_id": -1, 00:14:59.205 "assigned_rate_limits": { 00:14:59.205 "rw_ios_per_sec": 0, 00:14:59.205 "rw_mbytes_per_sec": 0, 00:14:59.205 "r_mbytes_per_sec": 0, 00:14:59.205 "w_mbytes_per_sec": 0 00:14:59.205 }, 00:14:59.205 "claimed": false, 00:14:59.205 "zoned": false, 00:14:59.205 "supported_io_types": { 00:14:59.205 "read": true, 00:14:59.205 "write": true, 00:14:59.205 "unmap": true, 00:14:59.205 "flush": true, 
00:14:59.205 "reset": true, 00:14:59.205 "nvme_admin": true, 00:14:59.205 "nvme_io": true, 00:14:59.205 "nvme_io_md": false, 00:14:59.205 "write_zeroes": true, 00:14:59.205 "zcopy": false, 00:14:59.205 "get_zone_info": false, 00:14:59.205 "zone_management": false, 00:14:59.205 "zone_append": false, 00:14:59.205 "compare": true, 00:14:59.205 "compare_and_write": false, 00:14:59.205 "abort": true, 00:14:59.205 "seek_hole": false, 00:14:59.205 "seek_data": false, 00:14:59.205 "copy": true, 00:14:59.205 "nvme_iov_md": false 00:14:59.205 }, 00:14:59.205 "driver_specific": { 00:14:59.205 "nvme": [ 00:14:59.205 { 00:14:59.205 "pci_address": "0000:00:11.0", 00:14:59.205 "trid": { 00:14:59.205 "trtype": "PCIe", 00:14:59.205 "traddr": "0000:00:11.0" 00:14:59.205 }, 00:14:59.205 "ctrlr_data": { 00:14:59.205 "cntlid": 0, 00:14:59.205 "vendor_id": "0x1b36", 00:14:59.206 "model_number": "QEMU NVMe Ctrl", 00:14:59.206 "serial_number": "12341", 00:14:59.206 "firmware_revision": "8.0.0", 00:14:59.206 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:59.206 "oacs": { 00:14:59.206 "security": 0, 00:14:59.206 "format": 1, 00:14:59.206 "firmware": 0, 00:14:59.206 "ns_manage": 1 00:14:59.206 }, 00:14:59.206 "multi_ctrlr": false, 00:14:59.206 "ana_reporting": false 00:14:59.206 }, 00:14:59.206 "vs": { 00:14:59.206 "nvme_version": "1.4" 00:14:59.206 }, 00:14:59.206 "ns_data": { 00:14:59.206 "id": 1, 00:14:59.206 "can_share": false 00:14:59.206 } 00:14:59.206 } 00:14:59.206 ], 00:14:59.206 "mp_policy": "active_passive" 00:14:59.206 } 00:14:59.206 } 00:14:59.206 ]' 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:59.206 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:59.463 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:59.463 06:07:24 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:59.721 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=fc2ef154-570a-44ee-b19c-a599277bbe62 00:14:59.722 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fc2ef154-570a-44ee-b19c-a599277bbe62 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=5fd471de-eb0f-43ea-bd24-d8504901aece 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5fd471de-eb0f-43ea-bd24-d8504901aece 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:59.981 06:07:25 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=5fd471de-eb0f-43ea-bd24-d8504901aece 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 5fd471de-eb0f-43ea-bd24-d8504901aece 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=5fd471de-eb0f-43ea-bd24-d8504901aece 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5fd471de-eb0f-43ea-bd24-d8504901aece 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:59.981 { 00:14:59.981 "name": "5fd471de-eb0f-43ea-bd24-d8504901aece", 00:14:59.981 "aliases": [ 00:14:59.981 "lvs/nvme0n1p0" 00:14:59.981 ], 00:14:59.981 "product_name": "Logical Volume", 00:14:59.981 "block_size": 4096, 00:14:59.981 "num_blocks": 26476544, 00:14:59.981 "uuid": "5fd471de-eb0f-43ea-bd24-d8504901aece", 00:14:59.981 "assigned_rate_limits": { 00:14:59.981 "rw_ios_per_sec": 0, 00:14:59.981 "rw_mbytes_per_sec": 0, 00:14:59.981 "r_mbytes_per_sec": 0, 00:14:59.981 "w_mbytes_per_sec": 0 00:14:59.981 }, 00:14:59.981 "claimed": false, 00:14:59.981 "zoned": false, 00:14:59.981 "supported_io_types": { 00:14:59.981 "read": true, 00:14:59.981 "write": true, 00:14:59.981 "unmap": true, 00:14:59.981 "flush": false, 00:14:59.981 "reset": true, 00:14:59.981 "nvme_admin": false, 00:14:59.981 "nvme_io": false, 00:14:59.981 "nvme_io_md": false, 00:14:59.981 "write_zeroes": true, 00:14:59.981 "zcopy": false, 00:14:59.981 "get_zone_info": false, 00:14:59.981 "zone_management": false, 00:14:59.981 "zone_append": false, 00:14:59.981 "compare": false, 00:14:59.981 "compare_and_write": false, 00:14:59.981 "abort": false, 00:14:59.981 "seek_hole": true, 00:14:59.981 "seek_data": true, 00:14:59.981 "copy": false, 00:14:59.981 "nvme_iov_md": false 00:14:59.981 }, 00:14:59.981 "driver_specific": { 00:14:59.981 "lvol": { 00:14:59.981 "lvol_store_uuid": "fc2ef154-570a-44ee-b19c-a599277bbe62", 00:14:59.981 "base_bdev": "nvme0n1", 00:14:59.981 "thin_provision": true, 00:14:59.981 "num_allocated_clusters": 0, 00:14:59.981 "snapshot": false, 00:14:59.981 "clone": false, 00:14:59.981 "esnap_clone": false 00:14:59.981 } 00:14:59.981 } 00:14:59.981 } 00:14:59.981 ]' 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:59.981 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
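The get_bdev_size helper traced above is plain bdev-JSON arithmetic: it pulls block_size and num_blocks out of bdev_get_bdevs with jq and reports MiB, so nvme0n1's 1310720 blocks of 4096 B come out as 5120 MiB. A compact sketch under the same rpc.py/jq assumptions as the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<<"$bdev_info")
        nb=$(jq '.[] .num_blocks' <<<"$bdev_info")
        echo $((nb * bs / 1024 / 1024))   # 1310720 * 4096 / 1024 / 1024 = 5120
    }

    get_bdev_size nvme0n1   # -> 5120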
00:15:00.239 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 5fd471de-eb0f-43ea-bd24-d8504901aece 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=5fd471de-eb0f-43ea-bd24-d8504901aece 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:00.239 06:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5fd471de-eb0f-43ea-bd24-d8504901aece 00:15:00.497 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:00.497 { 00:15:00.497 "name": "5fd471de-eb0f-43ea-bd24-d8504901aece", 00:15:00.497 "aliases": [ 00:15:00.497 "lvs/nvme0n1p0" 00:15:00.497 ], 00:15:00.497 "product_name": "Logical Volume", 00:15:00.497 "block_size": 4096, 00:15:00.497 "num_blocks": 26476544, 00:15:00.497 "uuid": "5fd471de-eb0f-43ea-bd24-d8504901aece", 00:15:00.497 "assigned_rate_limits": { 00:15:00.497 "rw_ios_per_sec": 0, 00:15:00.497 "rw_mbytes_per_sec": 0, 00:15:00.497 "r_mbytes_per_sec": 0, 00:15:00.497 "w_mbytes_per_sec": 0 00:15:00.497 }, 00:15:00.497 "claimed": false, 00:15:00.497 "zoned": false, 00:15:00.497 "supported_io_types": { 00:15:00.497 "read": true, 00:15:00.497 "write": true, 00:15:00.497 "unmap": true, 00:15:00.497 "flush": false, 00:15:00.497 "reset": true, 00:15:00.497 "nvme_admin": false, 00:15:00.497 "nvme_io": false, 00:15:00.497 "nvme_io_md": false, 00:15:00.497 "write_zeroes": true, 00:15:00.497 "zcopy": false, 00:15:00.497 "get_zone_info": false, 00:15:00.497 "zone_management": false, 00:15:00.497 "zone_append": false, 00:15:00.497 "compare": false, 00:15:00.497 "compare_and_write": false, 00:15:00.497 "abort": false, 00:15:00.497 "seek_hole": true, 00:15:00.497 "seek_data": true, 00:15:00.497 "copy": false, 00:15:00.497 "nvme_iov_md": false 00:15:00.497 }, 00:15:00.497 "driver_specific": { 00:15:00.497 "lvol": { 00:15:00.497 "lvol_store_uuid": "fc2ef154-570a-44ee-b19c-a599277bbe62", 00:15:00.497 "base_bdev": "nvme0n1", 00:15:00.497 "thin_provision": true, 00:15:00.497 "num_allocated_clusters": 0, 00:15:00.497 "snapshot": false, 00:15:00.497 "clone": false, 00:15:00.498 "esnap_clone": false 00:15:00.498 } 00:15:00.498 } 00:15:00.498 } 00:15:00.498 ]' 00:15:00.498 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:00.498 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:00.498 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:00.756 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 5fd471de-eb0f-43ea-bd24-d8504901aece 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=5fd471de-eb0f-43ea-bd24-d8504901aece 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:00.756 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5fd471de-eb0f-43ea-bd24-d8504901aece 00:15:01.013 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:01.014 { 00:15:01.014 "name": "5fd471de-eb0f-43ea-bd24-d8504901aece", 00:15:01.014 "aliases": [ 00:15:01.014 "lvs/nvme0n1p0" 00:15:01.014 ], 00:15:01.014 "product_name": "Logical Volume", 00:15:01.014 "block_size": 4096, 00:15:01.014 "num_blocks": 26476544, 00:15:01.014 "uuid": "5fd471de-eb0f-43ea-bd24-d8504901aece", 00:15:01.014 "assigned_rate_limits": { 00:15:01.014 "rw_ios_per_sec": 0, 00:15:01.014 "rw_mbytes_per_sec": 0, 00:15:01.014 "r_mbytes_per_sec": 0, 00:15:01.014 "w_mbytes_per_sec": 0 00:15:01.014 }, 00:15:01.014 "claimed": false, 00:15:01.014 "zoned": false, 00:15:01.014 "supported_io_types": { 00:15:01.014 "read": true, 00:15:01.014 "write": true, 00:15:01.014 "unmap": true, 00:15:01.014 "flush": false, 00:15:01.014 "reset": true, 00:15:01.014 "nvme_admin": false, 00:15:01.014 "nvme_io": false, 00:15:01.014 "nvme_io_md": false, 00:15:01.014 "write_zeroes": true, 00:15:01.014 "zcopy": false, 00:15:01.014 "get_zone_info": false, 00:15:01.014 "zone_management": false, 00:15:01.014 "zone_append": false, 00:15:01.014 "compare": false, 00:15:01.014 "compare_and_write": false, 00:15:01.014 "abort": false, 00:15:01.014 "seek_hole": true, 00:15:01.014 "seek_data": true, 00:15:01.014 "copy": false, 00:15:01.014 "nvme_iov_md": false 00:15:01.014 }, 00:15:01.014 "driver_specific": { 00:15:01.014 "lvol": { 00:15:01.014 "lvol_store_uuid": "fc2ef154-570a-44ee-b19c-a599277bbe62", 00:15:01.014 "base_bdev": "nvme0n1", 00:15:01.014 "thin_provision": true, 00:15:01.014 "num_allocated_clusters": 0, 00:15:01.014 "snapshot": false, 00:15:01.014 "clone": false, 00:15:01.014 "esnap_clone": false 00:15:01.014 } 00:15:01.014 } 00:15:01.014 } 00:15:01.014 ]' 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:01.014 06:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
5fd471de-eb0f-43ea-bd24-d8504901aece -c nvc0n1p0 --l2p_dram_limit 60 00:15:01.272 [2024-10-01 06:07:26.656084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.272 [2024-10-01 06:07:26.656145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:01.272 [2024-10-01 06:07:26.656158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:01.272 [2024-10-01 06:07:26.656175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.272 [2024-10-01 06:07:26.656256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.272 [2024-10-01 06:07:26.656266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:01.272 [2024-10-01 06:07:26.656275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:15:01.272 [2024-10-01 06:07:26.656295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.272 [2024-10-01 06:07:26.656328] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:01.273 [2024-10-01 06:07:26.656614] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:01.273 [2024-10-01 06:07:26.656628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.656636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:01.273 [2024-10-01 06:07:26.656643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:15:01.273 [2024-10-01 06:07:26.656651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.656689] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0e1df113-0ecc-4d2d-8e1d-31e62f204503 00:15:01.273 [2024-10-01 06:07:26.658087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.658208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:01.273 [2024-10-01 06:07:26.658226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:01.273 [2024-10-01 06:07:26.658233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.665197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.665296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:01.273 [2024-10-01 06:07:26.665310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:15:01.273 [2024-10-01 06:07:26.665317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.665416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.665428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:01.273 [2024-10-01 06:07:26.665437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:15:01.273 [2024-10-01 06:07:26.665442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.665505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.665513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:01.273 [2024-10-01 06:07:26.665521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.011 ms 00:15:01.273 [2024-10-01 06:07:26.665528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.665558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:01.273 [2024-10-01 06:07:26.667205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.667232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:01.273 [2024-10-01 06:07:26.667241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:15:01.273 [2024-10-01 06:07:26.667249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.667289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.667298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:01.273 [2024-10-01 06:07:26.667305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:01.273 [2024-10-01 06:07:26.667314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.667338] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:01.273 [2024-10-01 06:07:26.667454] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:01.273 [2024-10-01 06:07:26.667464] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:01.273 [2024-10-01 06:07:26.667474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:01.273 [2024-10-01 06:07:26.667494] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:01.273 [2024-10-01 06:07:26.667503] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:01.273 [2024-10-01 06:07:26.667509] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:01.273 [2024-10-01 06:07:26.667537] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:01.273 [2024-10-01 06:07:26.667543] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:01.273 [2024-10-01 06:07:26.667551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:01.273 [2024-10-01 06:07:26.667557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.667564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:01.273 [2024-10-01 06:07:26.667571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:15:01.273 [2024-10-01 06:07:26.667578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.667653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.273 [2024-10-01 06:07:26.667665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:01.273 [2024-10-01 06:07:26.667672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:01.273 [2024-10-01 06:07:26.667679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.273 [2024-10-01 06:07:26.667789] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:15:01.273 [2024-10-01 06:07:26.667798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:01.273 [2024-10-01 06:07:26.667805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:01.273 [2024-10-01 06:07:26.667812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.273 [2024-10-01 06:07:26.667818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:01.273 [2024-10-01 06:07:26.667825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:01.273 [2024-10-01 06:07:26.667831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:01.273 [2024-10-01 06:07:26.667838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:01.273 [2024-10-01 06:07:26.667861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:01.273 [2024-10-01 06:07:26.667871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:01.273 [2024-10-01 06:07:26.667878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:01.273 [2024-10-01 06:07:26.667886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:01.273 [2024-10-01 06:07:26.667892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:01.273 [2024-10-01 06:07:26.667902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:01.273 [2024-10-01 06:07:26.667913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:01.273 [2024-10-01 06:07:26.667921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.273 [2024-10-01 06:07:26.667927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:01.273 [2024-10-01 06:07:26.667935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:01.273 [2024-10-01 06:07:26.667941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.273 [2024-10-01 06:07:26.667950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:01.273 [2024-10-01 06:07:26.667956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:01.273 [2024-10-01 06:07:26.667964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.273 [2024-10-01 06:07:26.667970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:01.273 [2024-10-01 06:07:26.667977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:01.273 [2024-10-01 06:07:26.667984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.273 [2024-10-01 06:07:26.667991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:01.273 [2024-10-01 06:07:26.667998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:01.273 [2024-10-01 06:07:26.668009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.273 [2024-10-01 06:07:26.668016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:01.273 [2024-10-01 06:07:26.668025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:01.273 [2024-10-01 06:07:26.668031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.273 [2024-10-01 06:07:26.668039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:01.273 [2024-10-01 06:07:26.668045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:15:01.273 [2024-10-01 06:07:26.668054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:01.273 [2024-10-01 06:07:26.668059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:01.273 [2024-10-01 06:07:26.668067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:01.273 [2024-10-01 06:07:26.668073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:01.273 [2024-10-01 06:07:26.668080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:01.273 [2024-10-01 06:07:26.668086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:01.273 [2024-10-01 06:07:26.668093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.273 [2024-10-01 06:07:26.668099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:01.273 [2024-10-01 06:07:26.668106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:01.273 [2024-10-01 06:07:26.668114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.273 [2024-10-01 06:07:26.668122] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:01.273 [2024-10-01 06:07:26.668129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:01.273 [2024-10-01 06:07:26.668138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:01.273 [2024-10-01 06:07:26.668156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.273 [2024-10-01 06:07:26.668165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:01.273 [2024-10-01 06:07:26.668171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:01.273 [2024-10-01 06:07:26.668178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:01.274 [2024-10-01 06:07:26.668184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:01.274 [2024-10-01 06:07:26.668192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:01.274 [2024-10-01 06:07:26.668197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:01.274 [2024-10-01 06:07:26.668208] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:01.274 [2024-10-01 06:07:26.668218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:01.274 [2024-10-01 06:07:26.668228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:01.274 [2024-10-01 06:07:26.668234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:01.274 [2024-10-01 06:07:26.668242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:01.274 [2024-10-01 06:07:26.668248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:01.274 [2024-10-01 06:07:26.668256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:01.274 [2024-10-01 06:07:26.668262] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:01.274 [2024-10-01 06:07:26.668271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:01.274 [2024-10-01 06:07:26.668276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:01.274 [2024-10-01 06:07:26.668284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:01.274 [2024-10-01 06:07:26.668289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:01.274 [2024-10-01 06:07:26.668296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:01.274 [2024-10-01 06:07:26.668301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:01.274 [2024-10-01 06:07:26.668308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:01.274 [2024-10-01 06:07:26.668313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:01.274 [2024-10-01 06:07:26.668320] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:01.274 [2024-10-01 06:07:26.668327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:01.274 [2024-10-01 06:07:26.668334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:01.274 [2024-10-01 06:07:26.668341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:01.274 [2024-10-01 06:07:26.668348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:01.274 [2024-10-01 06:07:26.668356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:01.274 [2024-10-01 06:07:26.668364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.274 [2024-10-01 06:07:26.668370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:01.274 [2024-10-01 06:07:26.668379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:15:01.274 [2024-10-01 06:07:26.668387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.274 [2024-10-01 06:07:26.668459] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
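
# Not part of the test run: a minimal sanity-check sketch for the superblock
# metadata layout dump above. Assuming the console output was saved to
# build.log (hypothetical path) and GNU awk is available, it re-parses each
# "blk_offs/blk_sz" pair and confirms that the regions of each device tile
# the address space with no gaps or overlaps: the invariant this dump exists
# to let you verify.
grep -o 'blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' build.log | awk '
  {
    gsub(/blk_offs:0x|blk_sz:0x/, "")
    offs = strtonum("0x" $1); sz = strtonum("0x" $2)
    if (offs == 0) next_free = 0   # each device dump (nvc, then base dev) restarts at offset 0
    if (offs != next_free)
      printf "gap/overlap at 0x%x (expected 0x%x)\n", offs, next_free
    next_free = offs + sz
  }
  END { printf "last region ends at 0x%x\n", next_free }'
# Against the dump above this should print no gap messages and report the
# last region ending at 0x1940000 (0x19003a0 + 0x3fc60 on the base device).
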
00:15:01.274 [2024-10-01 06:07:26.668476] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:03.802 [2024-10-01 06:07:29.047925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.048204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:03.802 [2024-10-01 06:07:29.048228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2379.450 ms 00:15:03.802 [2024-10-01 06:07:29.048237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.066487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.066545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:03.802 [2024-10-01 06:07:29.066563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.140 ms 00:15:03.802 [2024-10-01 06:07:29.066573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.066725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.066737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:03.802 [2024-10-01 06:07:29.066765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:15:03.802 [2024-10-01 06:07:29.066772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.078812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.078899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:03.802 [2024-10-01 06:07:29.078919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.971 ms 00:15:03.802 [2024-10-01 06:07:29.078931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.078996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.079008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:03.802 [2024-10-01 06:07:29.079022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:03.802 [2024-10-01 06:07:29.079033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.079534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.079560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:03.802 [2024-10-01 06:07:29.079578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:15:03.802 [2024-10-01 06:07:29.079590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.079794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.079822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:03.802 [2024-10-01 06:07:29.079839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:15:03.802 [2024-10-01 06:07:29.079866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.087303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.087477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:03.802 [2024-10-01 
06:07:29.087496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.391 ms 00:15:03.802 [2024-10-01 06:07:29.087504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.096625] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:03.802 [2024-10-01 06:07:29.113967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.114182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:03.802 [2024-10-01 06:07:29.114199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.334 ms 00:15:03.802 [2024-10-01 06:07:29.114211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.153354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.153599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:03.802 [2024-10-01 06:07:29.153621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.101 ms 00:15:03.802 [2024-10-01 06:07:29.153634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.153830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.153876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:03.802 [2024-10-01 06:07:29.153885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:15:03.802 [2024-10-01 06:07:29.153908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.156916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.156956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:03.802 [2024-10-01 06:07:29.156978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:15:03.802 [2024-10-01 06:07:29.156992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.159277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.159314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:03.802 [2024-10-01 06:07:29.159324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:15:03.802 [2024-10-01 06:07:29.159332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.159667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.159685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:03.802 [2024-10-01 06:07:29.159694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:15:03.802 [2024-10-01 06:07:29.159705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.183407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.183470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:03.802 [2024-10-01 06:07:29.183484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.660 ms 00:15:03.802 [2024-10-01 06:07:29.183508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.187516] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.187557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:03.802 [2024-10-01 06:07:29.187569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.924 ms 00:15:03.802 [2024-10-01 06:07:29.187580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.190797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.190835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:03.802 [2024-10-01 06:07:29.190868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.170 ms 00:15:03.802 [2024-10-01 06:07:29.190879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.193836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.193909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:03.802 [2024-10-01 06:07:29.193920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:15:03.802 [2024-10-01 06:07:29.193933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.193981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.193993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:03.802 [2024-10-01 06:07:29.194003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:03.802 [2024-10-01 06:07:29.194012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.194094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.802 [2024-10-01 06:07:29.194106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:03.802 [2024-10-01 06:07:29.194116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:03.802 [2024-10-01 06:07:29.194126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.802 [2024-10-01 06:07:29.195178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2538.610 ms, result 0 00:15:03.802 { 00:15:03.802 "name": "ftl0", 00:15:03.802 "uuid": "0e1df113-0ecc-4d2d-8e1d-31e62f204503" 00:15:03.802 } 00:15:03.802 06:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:03.802 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:03.802 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:03.802 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:03.802 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:03.802 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:03.802 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:03.802 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:04.061 [ 00:15:04.061 { 00:15:04.061 "name": "ftl0", 00:15:04.061 "aliases": [ 00:15:04.061 "0e1df113-0ecc-4d2d-8e1d-31e62f204503" 00:15:04.061 ], 00:15:04.061 "product_name": "FTL disk", 00:15:04.061 
"block_size": 4096, 00:15:04.061 "num_blocks": 20971520, 00:15:04.061 "uuid": "0e1df113-0ecc-4d2d-8e1d-31e62f204503", 00:15:04.061 "assigned_rate_limits": { 00:15:04.061 "rw_ios_per_sec": 0, 00:15:04.061 "rw_mbytes_per_sec": 0, 00:15:04.061 "r_mbytes_per_sec": 0, 00:15:04.061 "w_mbytes_per_sec": 0 00:15:04.061 }, 00:15:04.061 "claimed": false, 00:15:04.061 "zoned": false, 00:15:04.061 "supported_io_types": { 00:15:04.061 "read": true, 00:15:04.061 "write": true, 00:15:04.061 "unmap": true, 00:15:04.061 "flush": true, 00:15:04.061 "reset": false, 00:15:04.061 "nvme_admin": false, 00:15:04.061 "nvme_io": false, 00:15:04.061 "nvme_io_md": false, 00:15:04.061 "write_zeroes": true, 00:15:04.061 "zcopy": false, 00:15:04.061 "get_zone_info": false, 00:15:04.061 "zone_management": false, 00:15:04.061 "zone_append": false, 00:15:04.061 "compare": false, 00:15:04.061 "compare_and_write": false, 00:15:04.061 "abort": false, 00:15:04.061 "seek_hole": false, 00:15:04.061 "seek_data": false, 00:15:04.061 "copy": false, 00:15:04.061 "nvme_iov_md": false 00:15:04.061 }, 00:15:04.061 "driver_specific": { 00:15:04.061 "ftl": { 00:15:04.061 "base_bdev": "5fd471de-eb0f-43ea-bd24-d8504901aece", 00:15:04.061 "cache": "nvc0n1p0" 00:15:04.061 } 00:15:04.061 } 00:15:04.061 } 00:15:04.061 ] 00:15:04.061 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:04.061 06:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:04.061 06:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:04.322 06:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:04.322 06:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:04.322 [2024-10-01 06:07:29.905580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.905640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:04.322 [2024-10-01 06:07:29.905656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:04.322 [2024-10-01 06:07:29.905662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.905694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:04.322 [2024-10-01 06:07:29.906262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.906293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:04.322 [2024-10-01 06:07:29.906301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:15:04.322 [2024-10-01 06:07:29.906311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.906714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.906725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:04.322 [2024-10-01 06:07:29.906733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:15:04.322 [2024-10-01 06:07:29.906742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.909178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.909216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:04.322 [2024-10-01 
06:07:29.909224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.401 ms 00:15:04.322 [2024-10-01 06:07:29.909232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.913873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.913899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:04.322 [2024-10-01 06:07:29.913908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:15:04.322 [2024-10-01 06:07:29.913917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.915696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.915730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:04.322 [2024-10-01 06:07:29.915737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:15:04.322 [2024-10-01 06:07:29.915747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.920034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.920160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:04.322 [2024-10-01 06:07:29.920173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.253 ms 00:15:04.322 [2024-10-01 06:07:29.920183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.920333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.920342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:04.322 [2024-10-01 06:07:29.920349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:15:04.322 [2024-10-01 06:07:29.920357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.921799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.921828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:04.322 [2024-10-01 06:07:29.921836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:15:04.322 [2024-10-01 06:07:29.921852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.922867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.922895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:04.322 [2024-10-01 06:07:29.922904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:15:04.322 [2024-10-01 06:07:29.922912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.923683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.923786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:04.322 [2024-10-01 06:07:29.923797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:15:04.322 [2024-10-01 06:07:29.923804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.924607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.322 [2024-10-01 06:07:29.924631] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:04.322 [2024-10-01 06:07:29.924639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:15:04.322 [2024-10-01 06:07:29.924646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.322 [2024-10-01 06:07:29.924678] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:04.322 [2024-10-01 06:07:29.924692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:04.322 [2024-10-01 06:07:29.924806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 
06:07:29.924886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.924988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:04.323 [2024-10-01 06:07:29.925078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:04.323 [2024-10-01 06:07:29.925455] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:04.323 [2024-10-01 06:07:29.925463] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0e1df113-0ecc-4d2d-8e1d-31e62f204503 00:15:04.323 [2024-10-01 06:07:29.925471] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:04.324 [2024-10-01 06:07:29.925477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:04.324 [2024-10-01 06:07:29.925487] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:04.324 [2024-10-01 06:07:29.925494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:04.324 [2024-10-01 06:07:29.925510] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:04.324 [2024-10-01 06:07:29.925516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:04.324 [2024-10-01 06:07:29.925523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:04.324 [2024-10-01 06:07:29.925528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:04.324 [2024-10-01 06:07:29.925535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:04.324 [2024-10-01 06:07:29.925541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.324 [2024-10-01 06:07:29.925548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:04.324 [2024-10-01 06:07:29.925555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:15:04.324 [2024-10-01 06:07:29.925563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.324 [2024-10-01 06:07:29.927363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.324 [2024-10-01 06:07:29.927385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:04.324 [2024-10-01 06:07:29.927392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:15:04.324 [2024-10-01 06:07:29.927399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.324 [2024-10-01 06:07:29.927492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.324 [2024-10-01 06:07:29.927501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:04.324 [2024-10-01 06:07:29.927508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:15:04.324 [2024-10-01 06:07:29.927515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.324 [2024-10-01 06:07:29.933588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.324 [2024-10-01 06:07:29.933626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:04.324 [2024-10-01 06:07:29.933635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.324 [2024-10-01 06:07:29.933644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.324 
[2024-10-01 06:07:29.933714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.324 [2024-10-01 06:07:29.933724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:04.324 [2024-10-01 06:07:29.933731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.324 [2024-10-01 06:07:29.933738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.324 [2024-10-01 06:07:29.933820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.324 [2024-10-01 06:07:29.933836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:04.324 [2024-10-01 06:07:29.933842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.324 [2024-10-01 06:07:29.933862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.324 [2024-10-01 06:07:29.933885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.324 [2024-10-01 06:07:29.933893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:04.324 [2024-10-01 06:07:29.933901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.324 [2024-10-01 06:07:29.933909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.582 [2024-10-01 06:07:29.945334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.582 [2024-10-01 06:07:29.945528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:04.582 [2024-10-01 06:07:29.945541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.582 [2024-10-01 06:07:29.945549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.582 [2024-10-01 06:07:29.954684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.582 [2024-10-01 06:07:29.954732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:04.582 [2024-10-01 06:07:29.954743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.582 [2024-10-01 06:07:29.954751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.582 [2024-10-01 06:07:29.954834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.582 [2024-10-01 06:07:29.954857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:04.582 [2024-10-01 06:07:29.954868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.582 [2024-10-01 06:07:29.954875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.582 [2024-10-01 06:07:29.954942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.582 [2024-10-01 06:07:29.954952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:04.582 [2024-10-01 06:07:29.954959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.582 [2024-10-01 06:07:29.954966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.582 [2024-10-01 06:07:29.955043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.582 [2024-10-01 06:07:29.955052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:04.582 [2024-10-01 06:07:29.955059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.583 [2024-10-01 06:07:29.955069] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.583 [2024-10-01 06:07:29.955119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.583 [2024-10-01 06:07:29.955129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:04.583 [2024-10-01 06:07:29.955137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.583 [2024-10-01 06:07:29.955144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.583 [2024-10-01 06:07:29.955182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.583 [2024-10-01 06:07:29.955194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:04.583 [2024-10-01 06:07:29.955200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.583 [2024-10-01 06:07:29.955218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.583 [2024-10-01 06:07:29.955270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:04.583 [2024-10-01 06:07:29.955280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:04.583 [2024-10-01 06:07:29.955287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:04.583 [2024-10-01 06:07:29.955295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.583 [2024-10-01 06:07:29.955453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.839 ms, result 0 00:15:04.583 true 00:15:04.583 06:07:29 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83912 00:15:04.583 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 83912 ']' 00:15:04.583 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 83912 00:15:04.583 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:04.583 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.583 06:07:29 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83912 00:15:04.583 killing process with pid 83912 00:15:04.583 06:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:04.583 06:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:04.583 06:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83912' 00:15:04.583 06:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 83912 00:15:04.583 06:07:30 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 83912 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:09.847 06:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:09.847 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:09.847 fio-3.35 00:15:09.847 Starting 1 thread 00:15:13.131 00:15:13.131 test: (groupid=0, jobs=1): err= 0: pid=84066: Tue Oct 1 06:07:38 2024 00:15:13.131 read: IOPS=1345, BW=89.3MiB/s (93.7MB/s)(255MiB/2849msec) 00:15:13.131 slat (nsec): min=3960, max=24709, avg=5206.10, stdev=1685.29 00:15:13.131 clat (usec): min=245, max=975, avg=322.22, stdev=57.33 00:15:13.131 lat (usec): min=250, max=979, avg=327.43, stdev=58.00 00:15:13.131 clat percentiles (usec): 00:15:13.131 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:15:13.131 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:15:13.131 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 424], 00:15:13.131 | 99.00th=[ 603], 99.50th=[ 660], 99.90th=[ 807], 99.95th=[ 922], 00:15:13.131 | 99.99th=[ 979] 00:15:13.131 write: IOPS=1354, BW=90.0MiB/s (94.3MB/s)(256MiB/2846msec); 0 zone resets 00:15:13.131 slat (usec): min=14, max=165, avg=23.47, stdev= 4.85 00:15:13.131 clat (usec): min=260, max=1047, avg=374.42, stdev=65.21 00:15:13.131 lat (usec): min=284, max=1077, avg=397.89, stdev=65.57 00:15:13.131 clat percentiles (usec): 00:15:13.131 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 338], 00:15:13.131 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:15:13.131 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 412], 95.00th=[ 453], 00:15:13.131 | 99.00th=[ 693], 99.50th=[ 775], 99.90th=[ 914], 99.95th=[ 922], 00:15:13.131 | 99.99th=[ 1045] 00:15:13.131 bw ( KiB/s): min=89488, max=94792, per=100.00%, avg=92480.00, stdev=2329.93, samples=5 00:15:13.131 iops : min= 1316, max= 1394, avg=1360.00, stdev=34.26, samples=5 00:15:13.131 lat (usec) : 250=0.05%, 500=96.93%, 750=2.60%, 1000=0.40% 00:15:13.131 
lat (msec) : 2=0.01% 00:15:13.131 cpu : usr=99.30%, sys=0.07%, ctx=5, majf=0, minf=1326 00:15:13.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.131 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.131 00:15:13.131 Run status group 0 (all jobs): 00:15:13.131 READ: bw=89.3MiB/s (93.7MB/s), 89.3MiB/s-89.3MiB/s (93.7MB/s-93.7MB/s), io=255MiB (267MB), run=2849-2849msec 00:15:13.131 WRITE: bw=90.0MiB/s (94.3MB/s), 90.0MiB/s-90.0MiB/s (94.3MB/s-94.3MB/s), io=256MiB (269MB), run=2846-2846msec 00:15:13.698 ----------------------------------------------------- 00:15:13.698 Suppressions used: 00:15:13.698 count bytes template 00:15:13.698 1 5 /usr/src/fio/parse.c 00:15:13.698 1 8 libtcmalloc_minimal.so 00:15:13.698 1 904 libcrypto.so 00:15:13.698 ----------------------------------------------------- 00:15:13.698 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:13.698 06:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:13.698 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:13.698 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:13.698 fio-3.35 00:15:13.698 Starting 2 threads 00:15:40.232 00:15:40.232 first_half: (groupid=0, jobs=1): err= 0: pid=84147: Tue Oct 1 06:08:01 2024 00:15:40.232 read: IOPS=2990, BW=11.7MiB/s (12.2MB/s)(256MiB/21892msec) 00:15:40.232 slat (nsec): min=3045, max=23746, avg=5441.91, stdev=874.77 00:15:40.232 clat (usec): min=511, max=283230, avg=36424.94, stdev=22553.91 00:15:40.232 lat (usec): min=515, max=283235, avg=36430.38, stdev=22554.00 00:15:40.232 clat percentiles (msec): 00:15:40.232 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 31], 00:15:40.232 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:15:40.232 | 70.00th=[ 33], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 70], 00:15:40.232 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 209], 99.95th=[ 249], 00:15:40.232 | 99.99th=[ 279] 00:15:40.232 write: IOPS=2998, BW=11.7MiB/s (12.3MB/s)(256MiB/21858msec); 0 zone resets 00:15:40.232 slat (usec): min=3, max=146, avg= 6.48, stdev= 2.67 00:15:40.232 clat (usec): min=389, max=45883, avg=6344.04, stdev=6209.25 00:15:40.232 lat (usec): min=401, max=45890, avg=6350.52, stdev=6209.34 00:15:40.232 clat percentiles (usec): 00:15:40.232 | 1.00th=[ 725], 5.00th=[ 906], 10.00th=[ 1254], 20.00th=[ 2704], 00:15:40.232 | 30.00th=[ 3523], 40.00th=[ 4228], 50.00th=[ 4883], 60.00th=[ 5407], 00:15:40.232 | 70.00th=[ 5997], 80.00th=[ 7046], 90.00th=[13042], 95.00th=[17695], 00:15:40.232 | 99.00th=[32375], 99.50th=[35390], 99.90th=[43779], 99.95th=[44303], 00:15:40.232 | 99.99th=[45351] 00:15:40.232 bw ( KiB/s): min= 568, max=48120, per=98.64%, avg=23659.45, stdev=14600.03, samples=22 00:15:40.232 iops : min= 142, max=12030, avg=5914.86, stdev=3650.01, samples=22 00:15:40.232 lat (usec) : 500=0.05%, 750=0.65%, 1000=2.55% 00:15:40.232 lat (msec) : 2=3.88%, 4=11.23%, 10=24.82%, 20=6.00%, 50=47.60% 00:15:40.232 lat (msec) : 100=1.46%, 250=1.73%, 500=0.02% 00:15:40.232 cpu : usr=99.34%, sys=0.11%, ctx=40, majf=0, minf=5603 00:15:40.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:40.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.232 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.232 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.232 second_half: (groupid=0, jobs=1): err= 0: pid=84148: Tue Oct 1 06:08:01 2024 00:15:40.232 read: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(256MiB/21723msec) 00:15:40.232 slat (nsec): min=3106, max=46804, avg=4134.72, stdev=998.58 00:15:40.232 clat (msec): min=11, max=192, avg=36.70, stdev=19.98 00:15:40.232 lat (msec): min=11, max=192, avg=36.70, stdev=19.98 00:15:40.232 clat percentiles (msec): 00:15:40.232 | 1.00th=[ 26], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:15:40.232 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:15:40.232 | 70.00th=[ 33], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 66], 00:15:40.232 | 99.00th=[ 146], 99.50th=[ 
159], 99.90th=[ 176], 99.95th=[ 182], 00:15:40.232 | 99.99th=[ 190] 00:15:40.232 write: IOPS=3033, BW=11.8MiB/s (12.4MB/s)(256MiB/21603msec); 0 zone resets 00:15:40.232 slat (usec): min=3, max=416, avg= 5.64, stdev= 3.82 00:15:40.232 clat (usec): min=391, max=29672, avg=5740.66, stdev=3649.67 00:15:40.232 lat (usec): min=405, max=29677, avg=5746.30, stdev=3650.08 00:15:40.232 clat percentiles (usec): 00:15:40.232 | 1.00th=[ 906], 5.00th=[ 1729], 10.00th=[ 2474], 20.00th=[ 3261], 00:15:40.232 | 30.00th=[ 3884], 40.00th=[ 4424], 50.00th=[ 4883], 60.00th=[ 5342], 00:15:40.232 | 70.00th=[ 5735], 80.00th=[ 6652], 90.00th=[11600], 95.00th=[13435], 00:15:40.232 | 99.00th=[17695], 99.50th=[22676], 99.90th=[26870], 99.95th=[28443], 00:15:40.232 | 99.99th=[29230] 00:15:40.232 bw ( KiB/s): min= 2768, max=47648, per=100.00%, avg=24868.19, stdev=12931.90, samples=21 00:15:40.232 iops : min= 692, max=11912, avg=6217.05, stdev=3232.97, samples=21 00:15:40.232 lat (usec) : 500=0.02%, 750=0.18%, 1000=0.51% 00:15:40.232 lat (msec) : 2=2.46%, 4=12.73%, 10=26.79%, 20=7.03%, 50=47.02% 00:15:40.232 lat (msec) : 100=1.66%, 250=1.59% 00:15:40.232 cpu : usr=99.23%, sys=0.10%, ctx=46, majf=0, minf=5529 00:15:40.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:40.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.233 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.233 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.233 00:15:40.233 Run status group 0 (all jobs): 00:15:40.233 READ: bw=23.4MiB/s (24.5MB/s), 11.7MiB/s-11.8MiB/s (12.2MB/s-12.3MB/s), io=512MiB (536MB), run=21723-21892msec 00:15:40.233 WRITE: bw=23.4MiB/s (24.6MB/s), 11.7MiB/s-11.8MiB/s (12.3MB/s-12.4MB/s), io=512MiB (537MB), run=21603-21858msec 00:15:40.233 ----------------------------------------------------- 00:15:40.233 Suppressions used: 00:15:40.233 count bytes template 00:15:40.233 2 10 /usr/src/fio/parse.c 00:15:40.233 2 192 /usr/src/fio/iolog.c 00:15:40.233 1 8 libtcmalloc_minimal.so 00:15:40.233 1 904 libcrypto.so 00:15:40.233 ----------------------------------------------------- 00:15:40.233 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:40.233 06:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:40.233 06:08:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:40.233 06:08:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:40.233 06:08:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:40.233 06:08:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:40.233 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:40.233 fio-3.35 00:15:40.233 Starting 1 thread 00:15:52.423 00:15:52.423 test: (groupid=0, jobs=1): err= 0: pid=84428: Tue Oct 1 06:08:16 2024 00:15:52.423 read: IOPS=7702, BW=30.1MiB/s (31.5MB/s)(255MiB/8465msec) 00:15:52.423 slat (nsec): min=2975, max=24989, avg=4769.81, stdev=1080.68 00:15:52.423 clat (usec): min=526, max=32958, avg=16608.15, stdev=1653.63 00:15:52.423 lat (usec): min=535, max=32962, avg=16612.92, stdev=1653.65 00:15:52.423 clat percentiles (usec): 00:15:52.423 | 1.00th=[15401], 5.00th=[15533], 10.00th=[15664], 20.00th=[15795], 00:15:52.423 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16188], 60.00th=[16319], 00:15:52.423 | 70.00th=[16450], 80.00th=[16581], 90.00th=[17433], 95.00th=[20055], 00:15:52.423 | 99.00th=[23987], 99.50th=[24773], 99.90th=[25822], 99.95th=[28967], 00:15:52.423 | 99.99th=[32113] 00:15:52.423 write: IOPS=15.9k, BW=62.2MiB/s (65.3MB/s)(256MiB/4113msec); 0 zone resets 00:15:52.423 slat (usec): min=4, max=472, avg= 7.33, stdev= 3.29 00:15:52.423 clat (usec): min=488, max=47542, avg=7986.78, stdev=9868.41 00:15:52.423 lat (usec): min=494, max=47548, avg=7994.11, stdev=9868.37 00:15:52.423 clat percentiles (usec): 00:15:52.423 | 1.00th=[ 635], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 848], 00:15:52.423 | 30.00th=[ 1045], 40.00th=[ 1467], 50.00th=[ 5538], 60.00th=[ 6194], 00:15:52.423 | 70.00th=[ 7242], 80.00th=[ 8717], 90.00th=[28705], 95.00th=[30540], 00:15:52.423 | 99.00th=[33424], 99.50th=[34866], 99.90th=[39584], 99.95th=[40109], 00:15:52.423 | 99.99th=[46400] 00:15:52.423 bw ( KiB/s): min=11528, max=87440, per=91.40%, avg=58254.22, stdev=20385.42, samples=9 00:15:52.423 iops : min= 2882, max=21860, avg=14563.56, stdev=5096.36, samples=9 00:15:52.423 lat (usec) : 500=0.01%, 750=5.38%, 1000=8.52% 00:15:52.423 lat (msec) : 2=6.70%, 4=0.55%, 10=20.44%, 20=47.92%, 50=10.50% 00:15:52.423 cpu : usr=99.13%, sys=0.20%, ctx=18, majf=0, minf=5577 00:15:52.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:15:52.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.423 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:52.423 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:52.423 00:15:52.423 Run status group 0 (all jobs): 00:15:52.423 READ: bw=30.1MiB/s (31.5MB/s), 30.1MiB/s-30.1MiB/s (31.5MB/s-31.5MB/s), io=255MiB (267MB), run=8465-8465msec 00:15:52.423 WRITE: bw=62.2MiB/s (65.3MB/s), 62.2MiB/s-62.2MiB/s (65.3MB/s-65.3MB/s), io=256MiB (268MB), run=4113-4113msec 00:15:52.423 ----------------------------------------------------- 00:15:52.423 Suppressions used: 00:15:52.423 count bytes template 00:15:52.423 1 5 /usr/src/fio/parse.c 00:15:52.423 2 192 /usr/src/fio/iolog.c 00:15:52.423 1 8 libtcmalloc_minimal.so 00:15:52.423 1 904 libcrypto.so 00:15:52.423 ----------------------------------------------------- 00:15:52.423 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:52.423 Remove shared memory files 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69441 /dev/shm/spdk_tgt_trace.pid82859 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:52.423 06:08:17 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:52.423 ************************************ 00:15:52.423 END TEST ftl_fio_basic 00:15:52.423 ************************************ 00:15:52.423 00:15:52.423 real 0m54.214s 00:15:52.423 user 1m59.060s 00:15:52.423 sys 0m2.525s 00:15:52.424 06:08:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.424 06:08:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 06:08:17 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:52.424 06:08:17 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:52.424 06:08:17 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.424 06:08:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 ************************************ 00:15:52.424 START TEST ftl_bdevperf 00:15:52.424 ************************************ 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:52.424 * Looking for test storage... 
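The fio runs above all go through the same wrapper: fio_bdev resolves the sanitizer runtime that the SPDK fio plugin was built against and preloads it ahead of the plugin, since ASAN has to initialize before any instrumented code runs. A minimal sketch of that invocation pattern, using the paths exactly as they appear in this log:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Resolve the ASAN runtime the plugin links against (the same
    # ldd | grep | awk pipeline the harness traces above).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # ASAN must come first in LD_PRELOAD so the sanitizer initializes
    # before the plugin; the job file selects ioengine=spdk_bdev.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio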
00:15:52.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:52.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.424 --rc genhtml_branch_coverage=1 00:15:52.424 --rc genhtml_function_coverage=1 00:15:52.424 --rc genhtml_legend=1 00:15:52.424 --rc geninfo_all_blocks=1 00:15:52.424 --rc geninfo_unexecuted_blocks=1 00:15:52.424 00:15:52.424 ' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:52.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.424 --rc genhtml_branch_coverage=1 00:15:52.424 
--rc genhtml_function_coverage=1 00:15:52.424 --rc genhtml_legend=1 00:15:52.424 --rc geninfo_all_blocks=1 00:15:52.424 --rc geninfo_unexecuted_blocks=1 00:15:52.424 00:15:52.424 ' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:52.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.424 --rc genhtml_branch_coverage=1 00:15:52.424 --rc genhtml_function_coverage=1 00:15:52.424 --rc genhtml_legend=1 00:15:52.424 --rc geninfo_all_blocks=1 00:15:52.424 --rc geninfo_unexecuted_blocks=1 00:15:52.424 00:15:52.424 ' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:52.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.424 --rc genhtml_branch_coverage=1 00:15:52.424 --rc genhtml_function_coverage=1 00:15:52.424 --rc genhtml_legend=1 00:15:52.424 --rc geninfo_all_blocks=1 00:15:52.424 --rc geninfo_unexecuted_blocks=1 00:15:52.424 00:15:52.424 ' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84655 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84655 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84655 ']' 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.424 06:08:17 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 [2024-10-01 06:08:17.643198] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
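Here bdevperf is launched with -z, which parks the application until an RPC client tells it to proceed, and -T ftl0, which aims the workload at the FTL bdev; waitforlisten then blocks until the RPC socket answers. A rough stand-in for that startup handshake (the polling loop is an assumption replacing the harness's waitforlisten helper):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -z: start suspended and wait for RPC; -T ftl0: target only the ftl0 bdev.
    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    # Poll the default RPC socket until the target accepts requests.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done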
00:15:52.424 [2024-10-01 06:08:17.643525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84655 ] 00:15:52.424 [2024-10-01 06:08:17.780613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.424 [2024-10-01 06:08:17.824653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.990 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.990 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:15:52.990 06:08:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:52.990 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:52.990 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:52.990 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:52.990 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:52.990 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:53.249 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:53.249 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:53.249 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:53.249 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:53.249 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:53.249 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:53.249 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:53.249 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:53.507 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:53.507 { 00:15:53.507 "name": "nvme0n1", 00:15:53.507 "aliases": [ 00:15:53.508 "5616020f-fd87-414f-b69c-91e59e8c48b7" 00:15:53.508 ], 00:15:53.508 "product_name": "NVMe disk", 00:15:53.508 "block_size": 4096, 00:15:53.508 "num_blocks": 1310720, 00:15:53.508 "uuid": "5616020f-fd87-414f-b69c-91e59e8c48b7", 00:15:53.508 "numa_id": -1, 00:15:53.508 "assigned_rate_limits": { 00:15:53.508 "rw_ios_per_sec": 0, 00:15:53.508 "rw_mbytes_per_sec": 0, 00:15:53.508 "r_mbytes_per_sec": 0, 00:15:53.508 "w_mbytes_per_sec": 0 00:15:53.508 }, 00:15:53.508 "claimed": true, 00:15:53.508 "claim_type": "read_many_write_one", 00:15:53.508 "zoned": false, 00:15:53.508 "supported_io_types": { 00:15:53.508 "read": true, 00:15:53.508 "write": true, 00:15:53.508 "unmap": true, 00:15:53.508 "flush": true, 00:15:53.508 "reset": true, 00:15:53.508 "nvme_admin": true, 00:15:53.508 "nvme_io": true, 00:15:53.508 "nvme_io_md": false, 00:15:53.508 "write_zeroes": true, 00:15:53.508 "zcopy": false, 00:15:53.508 "get_zone_info": false, 00:15:53.508 "zone_management": false, 00:15:53.508 "zone_append": false, 00:15:53.508 "compare": true, 00:15:53.508 "compare_and_write": false, 00:15:53.508 "abort": true, 00:15:53.508 "seek_hole": false, 00:15:53.508 "seek_data": false, 00:15:53.508 "copy": true, 00:15:53.508 "nvme_iov_md": false 00:15:53.508 }, 00:15:53.508 "driver_specific": { 00:15:53.508 
"nvme": [ 00:15:53.508 { 00:15:53.508 "pci_address": "0000:00:11.0", 00:15:53.508 "trid": { 00:15:53.508 "trtype": "PCIe", 00:15:53.508 "traddr": "0000:00:11.0" 00:15:53.508 }, 00:15:53.508 "ctrlr_data": { 00:15:53.508 "cntlid": 0, 00:15:53.508 "vendor_id": "0x1b36", 00:15:53.508 "model_number": "QEMU NVMe Ctrl", 00:15:53.508 "serial_number": "12341", 00:15:53.508 "firmware_revision": "8.0.0", 00:15:53.508 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:53.508 "oacs": { 00:15:53.508 "security": 0, 00:15:53.508 "format": 1, 00:15:53.508 "firmware": 0, 00:15:53.508 "ns_manage": 1 00:15:53.508 }, 00:15:53.508 "multi_ctrlr": false, 00:15:53.508 "ana_reporting": false 00:15:53.508 }, 00:15:53.508 "vs": { 00:15:53.508 "nvme_version": "1.4" 00:15:53.508 }, 00:15:53.508 "ns_data": { 00:15:53.508 "id": 1, 00:15:53.508 "can_share": false 00:15:53.508 } 00:15:53.508 } 00:15:53.508 ], 00:15:53.508 "mp_policy": "active_passive" 00:15:53.508 } 00:15:53.508 } 00:15:53.508 ]' 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:53.508 06:08:18 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:53.766 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=fc2ef154-570a-44ee-b19c-a599277bbe62 00:15:53.766 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:53.766 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc2ef154-570a-44ee-b19c-a599277bbe62 00:15:54.025 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:54.025 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=850aaecd-4a4c-4f12-a388-cd0a8e3e38da 00:15:54.025 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 850aaecd-4a4c-4f12-a388-cd0a8e3e38da 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:54.283 06:08:19 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:54.283 06:08:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:54.284 06:08:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:54.542 06:08:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:54.542 { 00:15:54.542 "name": "8a50539e-ed58-43c4-82df-ac8b7b997694", 00:15:54.542 "aliases": [ 00:15:54.542 "lvs/nvme0n1p0" 00:15:54.542 ], 00:15:54.542 "product_name": "Logical Volume", 00:15:54.542 "block_size": 4096, 00:15:54.542 "num_blocks": 26476544, 00:15:54.542 "uuid": "8a50539e-ed58-43c4-82df-ac8b7b997694", 00:15:54.542 "assigned_rate_limits": { 00:15:54.542 "rw_ios_per_sec": 0, 00:15:54.542 "rw_mbytes_per_sec": 0, 00:15:54.542 "r_mbytes_per_sec": 0, 00:15:54.542 "w_mbytes_per_sec": 0 00:15:54.542 }, 00:15:54.542 "claimed": false, 00:15:54.542 "zoned": false, 00:15:54.542 "supported_io_types": { 00:15:54.542 "read": true, 00:15:54.542 "write": true, 00:15:54.542 "unmap": true, 00:15:54.542 "flush": false, 00:15:54.542 "reset": true, 00:15:54.542 "nvme_admin": false, 00:15:54.542 "nvme_io": false, 00:15:54.542 "nvme_io_md": false, 00:15:54.542 "write_zeroes": true, 00:15:54.542 "zcopy": false, 00:15:54.542 "get_zone_info": false, 00:15:54.542 "zone_management": false, 00:15:54.542 "zone_append": false, 00:15:54.542 "compare": false, 00:15:54.542 "compare_and_write": false, 00:15:54.542 "abort": false, 00:15:54.542 "seek_hole": true, 00:15:54.542 "seek_data": true, 00:15:54.542 "copy": false, 00:15:54.542 "nvme_iov_md": false 00:15:54.542 }, 00:15:54.542 "driver_specific": { 00:15:54.542 "lvol": { 00:15:54.542 "lvol_store_uuid": "850aaecd-4a4c-4f12-a388-cd0a8e3e38da", 00:15:54.542 "base_bdev": "nvme0n1", 00:15:54.542 "thin_provision": true, 00:15:54.542 "num_allocated_clusters": 0, 00:15:54.542 "snapshot": false, 00:15:54.542 "clone": false, 00:15:54.542 "esnap_clone": false 00:15:54.542 } 00:15:54.542 } 00:15:54.542 } 00:15:54.542 ]' 00:15:54.542 06:08:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:54.542 06:08:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:54.542 06:08:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:54.542 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:54.542 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:54.542 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:54.542 06:08:20 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:54.542 06:08:20 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:54.542 06:08:20 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:54.800 06:08:20 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:54.800 06:08:20 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:54.800 06:08:20 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:54.800 06:08:20 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:54.800 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:54.800 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:54.800 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:54.800 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:55.059 { 00:15:55.059 "name": "8a50539e-ed58-43c4-82df-ac8b7b997694", 00:15:55.059 "aliases": [ 00:15:55.059 "lvs/nvme0n1p0" 00:15:55.059 ], 00:15:55.059 "product_name": "Logical Volume", 00:15:55.059 "block_size": 4096, 00:15:55.059 "num_blocks": 26476544, 00:15:55.059 "uuid": "8a50539e-ed58-43c4-82df-ac8b7b997694", 00:15:55.059 "assigned_rate_limits": { 00:15:55.059 "rw_ios_per_sec": 0, 00:15:55.059 "rw_mbytes_per_sec": 0, 00:15:55.059 "r_mbytes_per_sec": 0, 00:15:55.059 "w_mbytes_per_sec": 0 00:15:55.059 }, 00:15:55.059 "claimed": false, 00:15:55.059 "zoned": false, 00:15:55.059 "supported_io_types": { 00:15:55.059 "read": true, 00:15:55.059 "write": true, 00:15:55.059 "unmap": true, 00:15:55.059 "flush": false, 00:15:55.059 "reset": true, 00:15:55.059 "nvme_admin": false, 00:15:55.059 "nvme_io": false, 00:15:55.059 "nvme_io_md": false, 00:15:55.059 "write_zeroes": true, 00:15:55.059 "zcopy": false, 00:15:55.059 "get_zone_info": false, 00:15:55.059 "zone_management": false, 00:15:55.059 "zone_append": false, 00:15:55.059 "compare": false, 00:15:55.059 "compare_and_write": false, 00:15:55.059 "abort": false, 00:15:55.059 "seek_hole": true, 00:15:55.059 "seek_data": true, 00:15:55.059 "copy": false, 00:15:55.059 "nvme_iov_md": false 00:15:55.059 }, 00:15:55.059 "driver_specific": { 00:15:55.059 "lvol": { 00:15:55.059 "lvol_store_uuid": "850aaecd-4a4c-4f12-a388-cd0a8e3e38da", 00:15:55.059 "base_bdev": "nvme0n1", 00:15:55.059 "thin_provision": true, 00:15:55.059 "num_allocated_clusters": 0, 00:15:55.059 "snapshot": false, 00:15:55.059 "clone": false, 00:15:55.059 "esnap_clone": false 00:15:55.059 } 00:15:55.059 } 00:15:55.059 } 00:15:55.059 ]' 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:55.059 06:08:20 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:55.317 06:08:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:55.317 06:08:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:55.317 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:55.317 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:55.317 06:08:20 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:15:55.317 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:55.317 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a50539e-ed58-43c4-82df-ac8b7b997694 00:15:55.576 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:55.576 { 00:15:55.576 "name": "8a50539e-ed58-43c4-82df-ac8b7b997694", 00:15:55.576 "aliases": [ 00:15:55.576 "lvs/nvme0n1p0" 00:15:55.576 ], 00:15:55.576 "product_name": "Logical Volume", 00:15:55.576 "block_size": 4096, 00:15:55.576 "num_blocks": 26476544, 00:15:55.576 "uuid": "8a50539e-ed58-43c4-82df-ac8b7b997694", 00:15:55.576 "assigned_rate_limits": { 00:15:55.576 "rw_ios_per_sec": 0, 00:15:55.576 "rw_mbytes_per_sec": 0, 00:15:55.576 "r_mbytes_per_sec": 0, 00:15:55.576 "w_mbytes_per_sec": 0 00:15:55.576 }, 00:15:55.576 "claimed": false, 00:15:55.576 "zoned": false, 00:15:55.576 "supported_io_types": { 00:15:55.576 "read": true, 00:15:55.576 "write": true, 00:15:55.576 "unmap": true, 00:15:55.576 "flush": false, 00:15:55.576 "reset": true, 00:15:55.576 "nvme_admin": false, 00:15:55.576 "nvme_io": false, 00:15:55.576 "nvme_io_md": false, 00:15:55.576 "write_zeroes": true, 00:15:55.576 "zcopy": false, 00:15:55.576 "get_zone_info": false, 00:15:55.576 "zone_management": false, 00:15:55.576 "zone_append": false, 00:15:55.576 "compare": false, 00:15:55.576 "compare_and_write": false, 00:15:55.576 "abort": false, 00:15:55.576 "seek_hole": true, 00:15:55.576 "seek_data": true, 00:15:55.576 "copy": false, 00:15:55.576 "nvme_iov_md": false 00:15:55.576 }, 00:15:55.576 "driver_specific": { 00:15:55.576 "lvol": { 00:15:55.576 "lvol_store_uuid": "850aaecd-4a4c-4f12-a388-cd0a8e3e38da", 00:15:55.576 "base_bdev": "nvme0n1", 00:15:55.576 "thin_provision": true, 00:15:55.576 "num_allocated_clusters": 0, 00:15:55.576 "snapshot": false, 00:15:55.576 "clone": false, 00:15:55.576 "esnap_clone": false 00:15:55.576 } 00:15:55.576 } 00:15:55.576 } 00:15:55.576 ]' 00:15:55.576 06:08:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:55.576 06:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:55.576 06:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:55.576 06:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:55.576 06:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:55.576 06:08:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:55.576 06:08:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:55.576 06:08:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8a50539e-ed58-43c4-82df-ac8b7b997694 -c nvc0n1p0 --l2p_dram_limit 20 00:15:55.836 [2024-10-01 06:08:21.191906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.191971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:55.836 [2024-10-01 06:08:21.191986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:55.836 [2024-10-01 06:08:21.191994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.192050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.192058] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:55.836 [2024-10-01 06:08:21.192068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:15:55.836 [2024-10-01 06:08:21.192074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.192090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:55.836 [2024-10-01 06:08:21.192349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:55.836 [2024-10-01 06:08:21.192362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.192369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:55.836 [2024-10-01 06:08:21.192377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:15:55.836 [2024-10-01 06:08:21.192384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.192412] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dfb76f51-0cb0-4441-ae14-0166bf137723 00:15:55.836 [2024-10-01 06:08:21.193758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.193938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:55.836 [2024-10-01 06:08:21.193955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:55.836 [2024-10-01 06:08:21.193966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.201021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.201138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:55.836 [2024-10-01 06:08:21.201152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.989 ms 00:15:55.836 [2024-10-01 06:08:21.201162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.201238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.201247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:55.836 [2024-10-01 06:08:21.201254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:15:55.836 [2024-10-01 06:08:21.201262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.201314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.201327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:55.836 [2024-10-01 06:08:21.201334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:55.836 [2024-10-01 06:08:21.201341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.201359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:55.836 [2024-10-01 06:08:21.203056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.203081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:55.836 [2024-10-01 06:08:21.203091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:15:55.836 [2024-10-01 06:08:21.203098] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.203134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.203144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:55.836 [2024-10-01 06:08:21.203154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:55.836 [2024-10-01 06:08:21.203160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.203174] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:55.836 [2024-10-01 06:08:21.203454] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:55.836 [2024-10-01 06:08:21.203471] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:55.836 [2024-10-01 06:08:21.203484] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:55.836 [2024-10-01 06:08:21.203495] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:55.836 [2024-10-01 06:08:21.203502] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:55.836 [2024-10-01 06:08:21.203510] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:55.836 [2024-10-01 06:08:21.203516] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:55.836 [2024-10-01 06:08:21.203523] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:55.836 [2024-10-01 06:08:21.203529] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:55.836 [2024-10-01 06:08:21.203537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.203543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:55.836 [2024-10-01 06:08:21.203555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:15:55.836 [2024-10-01 06:08:21.203561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.203628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.836 [2024-10-01 06:08:21.203639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:55.836 [2024-10-01 06:08:21.203647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:55.836 [2024-10-01 06:08:21.203655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.836 [2024-10-01 06:08:21.203730] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:55.836 [2024-10-01 06:08:21.203738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:55.836 [2024-10-01 06:08:21.203747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:55.836 [2024-10-01 06:08:21.203758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:55.836 [2024-10-01 06:08:21.203772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:55.836 
[2024-10-01 06:08:21.203787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:55.836 [2024-10-01 06:08:21.203795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:55.836 [2024-10-01 06:08:21.203809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:55.836 [2024-10-01 06:08:21.203815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:55.836 [2024-10-01 06:08:21.203824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:55.836 [2024-10-01 06:08:21.203830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:55.836 [2024-10-01 06:08:21.203838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:55.836 [2024-10-01 06:08:21.203859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:55.836 [2024-10-01 06:08:21.203875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:55.836 [2024-10-01 06:08:21.203882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:55.836 [2024-10-01 06:08:21.203896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:55.836 [2024-10-01 06:08:21.203910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:55.836 [2024-10-01 06:08:21.203916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:55.836 [2024-10-01 06:08:21.203929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:55.836 [2024-10-01 06:08:21.203936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:55.836 [2024-10-01 06:08:21.203953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:55.836 [2024-10-01 06:08:21.203959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:55.836 [2024-10-01 06:08:21.203976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:55.836 [2024-10-01 06:08:21.203984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:55.836 [2024-10-01 06:08:21.203990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:55.836 [2024-10-01 06:08:21.203998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:55.836 [2024-10-01 06:08:21.204003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:55.837 [2024-10-01 06:08:21.204011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:55.837 [2024-10-01 06:08:21.204016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:55.837 [2024-10-01 06:08:21.204024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:15:55.837 [2024-10-01 06:08:21.204031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.837 [2024-10-01 06:08:21.204038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:55.837 [2024-10-01 06:08:21.204044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:55.837 [2024-10-01 06:08:21.204052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.837 [2024-10-01 06:08:21.204057] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:55.837 [2024-10-01 06:08:21.204067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:55.837 [2024-10-01 06:08:21.204078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:55.837 [2024-10-01 06:08:21.204086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.837 [2024-10-01 06:08:21.204092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:55.837 [2024-10-01 06:08:21.204099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:55.837 [2024-10-01 06:08:21.204105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:55.837 [2024-10-01 06:08:21.204112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:55.837 [2024-10-01 06:08:21.204118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:55.837 [2024-10-01 06:08:21.204124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:55.837 [2024-10-01 06:08:21.204133] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:55.837 [2024-10-01 06:08:21.204143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:55.837 [2024-10-01 06:08:21.204149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:55.837 [2024-10-01 06:08:21.204156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:55.837 [2024-10-01 06:08:21.204163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:55.837 [2024-10-01 06:08:21.204170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:55.837 [2024-10-01 06:08:21.204175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:55.837 [2024-10-01 06:08:21.204183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:55.837 [2024-10-01 06:08:21.204189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:55.837 [2024-10-01 06:08:21.204200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:55.837 [2024-10-01 06:08:21.204205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:55.837 [2024-10-01 06:08:21.204212] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:55.837 [2024-10-01 06:08:21.204218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:55.837 [2024-10-01 06:08:21.204224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:55.837 [2024-10-01 06:08:21.204229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:55.837 [2024-10-01 06:08:21.204237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:55.837 [2024-10-01 06:08:21.204242] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:55.837 [2024-10-01 06:08:21.204249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:55.837 [2024-10-01 06:08:21.204255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:55.837 [2024-10-01 06:08:21.204262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:55.837 [2024-10-01 06:08:21.204268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:55.837 [2024-10-01 06:08:21.204275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:55.837 [2024-10-01 06:08:21.204281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.837 [2024-10-01 06:08:21.204290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:55.837 [2024-10-01 06:08:21.204300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:15:55.837 [2024-10-01 06:08:21.204313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.837 [2024-10-01 06:08:21.204340] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
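For reference, the provisioning chain that produced the ftl0 device now being initialized is condensed below from the xtrace further up. The UUID captures are an assumption about what the create RPCs print on stdout; the real harness parses them from the replies:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base dev -> nvme0n1
    lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)                   # lvstore on nvme0n1
    lvol=$("$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")        # 103424 MiB thin lvol
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache dev -> nvc0n1
    "$rpc" bdev_split_create nvc0n1 -s 5171 1                            # 5171 MiB slice -> nvc0n1p0
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20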
00:15:55.837 [2024-10-01 06:08:21.204349] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:57.735 [2024-10-01 06:08:23.272555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.272635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:57.735 [2024-10-01 06:08:23.272651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2068.202 ms 00:15:57.735 [2024-10-01 06:08:23.272664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.735 [2024-10-01 06:08:23.291800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.292131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:57.735 [2024-10-01 06:08:23.292161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.038 ms 00:15:57.735 [2024-10-01 06:08:23.292182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.735 [2024-10-01 06:08:23.292373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.292392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:57.735 [2024-10-01 06:08:23.292406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:15:57.735 [2024-10-01 06:08:23.292421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.735 [2024-10-01 06:08:23.303791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.303952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:57.735 [2024-10-01 06:08:23.303969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.271 ms 00:15:57.735 [2024-10-01 06:08:23.303980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.735 [2024-10-01 06:08:23.304013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.304029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:57.735 [2024-10-01 06:08:23.304038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:57.735 [2024-10-01 06:08:23.304047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.735 [2024-10-01 06:08:23.304475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.304495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:57.735 [2024-10-01 06:08:23.304505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:15:57.735 [2024-10-01 06:08:23.304518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.735 [2024-10-01 06:08:23.304639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.304651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:57.735 [2024-10-01 06:08:23.304661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:15:57.735 [2024-10-01 06:08:23.304675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.735 [2024-10-01 06:08:23.310355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.310389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:57.735 [2024-10-01 
06:08:23.310399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.656 ms 00:15:57.735 [2024-10-01 06:08:23.310408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.735 [2024-10-01 06:08:23.319467] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:57.735 [2024-10-01 06:08:23.325585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.735 [2024-10-01 06:08:23.325616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:57.735 [2024-10-01 06:08:23.325630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.109 ms 00:15:57.735 [2024-10-01 06:08:23.325643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.376876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.376938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:57.993 [2024-10-01 06:08:23.376960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.196 ms 00:15:57.993 [2024-10-01 06:08:23.376969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.377161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.377172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:57.993 [2024-10-01 06:08:23.377193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:15:57.993 [2024-10-01 06:08:23.377211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.380657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.380702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:57.993 [2024-10-01 06:08:23.380718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.407 ms 00:15:57.993 [2024-10-01 06:08:23.380726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.383362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.383395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:57.993 [2024-10-01 06:08:23.383407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:15:57.993 [2024-10-01 06:08:23.383414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.383732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.383747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:57.993 [2024-10-01 06:08:23.383764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:15:57.993 [2024-10-01 06:08:23.383771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.411954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.411999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:57.993 [2024-10-01 06:08:23.412015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.161 ms 00:15:57.993 [2024-10-01 06:08:23.412023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.416215] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.416252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:57.993 [2024-10-01 06:08:23.416268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.134 ms 00:15:57.993 [2024-10-01 06:08:23.416276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.419775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.419812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:57.993 [2024-10-01 06:08:23.419824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.463 ms 00:15:57.993 [2024-10-01 06:08:23.419831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.423490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.423527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:57.993 [2024-10-01 06:08:23.423541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:15:57.993 [2024-10-01 06:08:23.423549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.423591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.423601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:57.993 [2024-10-01 06:08:23.423615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:57.993 [2024-10-01 06:08:23.423623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.423699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.993 [2024-10-01 06:08:23.423709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:57.993 [2024-10-01 06:08:23.423719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:57.993 [2024-10-01 06:08:23.423727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.993 [2024-10-01 06:08:23.424716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2232.371 ms, result 0 00:15:57.993 { 00:15:57.993 "name": "ftl0", 00:15:57.993 "uuid": "dfb76f51-0cb0-4441-ae14-0166bf137723" 00:15:57.993 } 00:15:57.993 06:08:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:57.993 06:08:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:15:57.993 06:08:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:15:58.250 06:08:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:58.250 [2024-10-01 06:08:23.733638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:58.250 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:58.250 Zero copy mechanism will not be used. 00:15:58.250 Running I/O for 4 seconds... 
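A quick aside on the notice just above: the 69632-byte I/O size used for this first pass is 17 × 4096-byte blocks, and because it exceeds the 65536-byte zero-copy threshold reported in the log, the run proceeds without the zero-copy path, exactly as the message says. A trivial check (not part of the captured output):

```python
io_size, zero_copy_threshold = 69632, 65536  # values from the log lines above
print(io_size // 4096)                # 17 blocks of 4 KiB per I/O
print(io_size > zero_copy_threshold)  # True -> "Zero copy mechanism will not be used."
```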
00:16:02.482 2962.00 IOPS, 196.70 MiB/s 2969.00 IOPS, 197.16 MiB/s 2939.00 IOPS, 195.17 MiB/s 2935.25 IOPS, 194.92 MiB/s 00:16:02.482 Latency(us) 00:16:02.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.482 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:02.482 ftl0 : 4.00 2934.05 194.84 0.00 0.00 359.58 176.44 2167.73 00:16:02.482 =================================================================================================================== 00:16:02.482 Total : 2934.05 194.84 0.00 0.00 359.58 176.44 2167.73 00:16:02.482 [2024-10-01 06:08:27.742584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:02.482 { 00:16:02.482 "results": [ 00:16:02.482 { 00:16:02.482 "job": "ftl0", 00:16:02.482 "core_mask": "0x1", 00:16:02.482 "workload": "randwrite", 00:16:02.482 "status": "finished", 00:16:02.482 "queue_depth": 1, 00:16:02.482 "io_size": 69632, 00:16:02.482 "runtime": 4.001972, 00:16:02.482 "iops": 2934.053511618772, 00:16:02.482 "mibps": 194.8394910059341, 00:16:02.482 "io_failed": 0, 00:16:02.482 "io_timeout": 0, 00:16:02.482 "avg_latency_us": 359.57962658700524, 00:16:02.482 "min_latency_us": 176.44307692307692, 00:16:02.482 "max_latency_us": 2167.729230769231 00:16:02.482 } 00:16:02.482 ], 00:16:02.482 "core_count": 1 00:16:02.482 } 00:16:02.482 06:08:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:02.482 [2024-10-01 06:08:27.839377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:02.482 Running I/O for 4 seconds... 00:16:06.662 10866.00 IOPS, 42.45 MiB/s 10622.00 IOPS, 41.49 MiB/s 10499.00 IOPS, 41.01 MiB/s 10458.50 IOPS, 40.85 MiB/s 00:16:06.662 Latency(us) 00:16:06.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.662 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.662 ftl0 : 4.02 10437.84 40.77 0.00 0.00 12232.45 247.34 32263.88 00:16:06.662 =================================================================================================================== 00:16:06.662 Total : 10437.84 40.77 0.00 0.00 12232.45 0.00 32263.88 00:16:06.662 [2024-10-01 06:08:31.867054] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:06.662 { 00:16:06.662 "results": [ 00:16:06.662 { 00:16:06.662 "job": "ftl0", 00:16:06.662 "core_mask": "0x1", 00:16:06.662 "workload": "randwrite", 00:16:06.662 "status": "finished", 00:16:06.662 "queue_depth": 128, 00:16:06.662 "io_size": 4096, 00:16:06.662 "runtime": 4.020179, 00:16:06.662 "iops": 10437.843688054687, 00:16:06.662 "mibps": 40.77282690646362, 00:16:06.662 "io_failed": 0, 00:16:06.662 "io_timeout": 0, 00:16:06.662 "avg_latency_us": 12232.453557321092, 00:16:06.662 "min_latency_us": 247.3353846153846, 00:16:06.662 "max_latency_us": 32263.876923076925 00:16:06.662 } 00:16:06.662 ], 00:16:06.662 "core_count": 1 00:16:06.662 } 00:16:06.662 06:08:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:06.662 [2024-10-01 06:08:31.970091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:06.662 Running I/O for 4 seconds... 
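For orientation, the MiB/s column bdevperf prints reduces to iops × io_size / 2^20, and the JSON blocks above contain enough to reproduce it. A standalone check against the two completed randwrite runs (a sketch, not part of the captured output):

```python
def mibps(iops: float, io_size_bytes: int) -> float:
    # bdevperf's MiB/s figure: bytes per second divided by 2**20
    return iops * io_size_bytes / (1024 * 1024)

print(mibps(2934.053511618772, 69632))  # ~194.84, matches the -q 1 -o 69632 run above
print(mibps(10437.843688054687, 4096))  # ~40.77, matches the -q 128 -o 4096 run above
```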
00:16:10.400 8344.00 IOPS, 32.59 MiB/s 8531.00 IOPS, 33.32 MiB/s 8578.00 IOPS, 33.51 MiB/s 8607.50 IOPS, 33.62 MiB/s 00:16:10.400 Latency(us) 00:16:10.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.400 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:10.400 Verification LBA range: start 0x0 length 0x1400000 00:16:10.400 ftl0 : 4.01 8619.57 33.67 0.00 0.00 14802.84 274.12 25811.10 00:16:10.400 =================================================================================================================== 00:16:10.400 Total : 8619.57 33.67 0.00 0.00 14802.84 0.00 25811.10 00:16:10.400 [2024-10-01 06:08:35.986893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:10.400 { 00:16:10.400 "results": [ 00:16:10.400 { 00:16:10.400 "job": "ftl0", 00:16:10.400 "core_mask": "0x1", 00:16:10.400 "workload": "verify", 00:16:10.400 "status": "finished", 00:16:10.400 "verify_range": { 00:16:10.400 "start": 0, 00:16:10.400 "length": 20971520 00:16:10.400 }, 00:16:10.400 "queue_depth": 128, 00:16:10.400 "io_size": 4096, 00:16:10.400 "runtime": 4.009132, 00:16:10.400 "iops": 8619.571518223895, 00:16:10.400 "mibps": 33.67020124306209, 00:16:10.400 "io_failed": 0, 00:16:10.400 "io_timeout": 0, 00:16:10.400 "avg_latency_us": 14802.837355984873, 00:16:10.400 "min_latency_us": 274.11692307692306, 00:16:10.400 "max_latency_us": 25811.10153846154 00:16:10.400 } 00:16:10.400 ], 00:16:10.400 "core_count": 1 00:16:10.400 } 00:16:10.400 06:08:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:10.659 [2024-10-01 06:08:36.191298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.659 [2024-10-01 06:08:36.191364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:10.659 [2024-10-01 06:08:36.191381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:10.659 [2024-10-01 06:08:36.191390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.659 [2024-10-01 06:08:36.191414] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:10.659 [2024-10-01 06:08:36.191980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.659 [2024-10-01 06:08:36.192002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:10.659 [2024-10-01 06:08:36.192012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:16:10.659 [2024-10-01 06:08:36.192024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.659 [2024-10-01 06:08:36.194329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.659 [2024-10-01 06:08:36.194524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:10.659 [2024-10-01 06:08:36.194545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.284 ms 00:16:10.659 [2024-10-01 06:08:36.194558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.338880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.338941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:10.919 [2024-10-01 06:08:36.338955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 144.301 ms 00:16:10.919 [2024-10-01
06:08:36.338965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.345156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.345338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:10.919 [2024-10-01 06:08:36.345353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.159 ms 00:16:10.919 [2024-10-01 06:08:36.345363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.346709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.346747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:10.919 [2024-10-01 06:08:36.346756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:16:10.919 [2024-10-01 06:08:36.346767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.351247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.351283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:10.919 [2024-10-01 06:08:36.351293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:16:10.919 [2024-10-01 06:08:36.351310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.351419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.351442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:10.919 [2024-10-01 06:08:36.351451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:10.919 [2024-10-01 06:08:36.351461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.353546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.353580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:10.919 [2024-10-01 06:08:36.353590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:16:10.919 [2024-10-01 06:08:36.353601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.355084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.355116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:10.919 [2024-10-01 06:08:36.355125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:16:10.919 [2024-10-01 06:08:36.355135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.356206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.356359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:10.919 [2024-10-01 06:08:36.356372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:16:10.919 [2024-10-01 06:08:36.356384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.357549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.919 [2024-10-01 06:08:36.357583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:10.919 [2024-10-01 06:08:36.357592] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.118 ms 00:16:10.919 [2024-10-01 06:08:36.357602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.919 [2024-10-01 06:08:36.357629] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:10.919 [2024-10-01 06:08:36.357646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:16:10.919 [2024-10-01 06:08:36.357874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.357992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:10.919 [2024-10-01 06:08:36.358163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358515] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:10.920 [2024-10-01 06:08:36.358549] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:10.920 [2024-10-01 06:08:36.358562] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dfb76f51-0cb0-4441-ae14-0166bf137723 00:16:10.920 [2024-10-01 06:08:36.358572] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:10.920 [2024-10-01 06:08:36.358588] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:10.920 [2024-10-01 06:08:36.358597] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:10.920 [2024-10-01 06:08:36.358604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:10.920 [2024-10-01 06:08:36.358615] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:10.920 [2024-10-01 06:08:36.358623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:10.920 [2024-10-01 06:08:36.358640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:10.920 [2024-10-01 06:08:36.358646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:10.920 [2024-10-01 06:08:36.358654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:10.920 [2024-10-01 06:08:36.358662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.920 [2024-10-01 06:08:36.358671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:10.920 [2024-10-01 06:08:36.358680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:16:10.920 [2024-10-01 06:08:36.358689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.360712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.920 [2024-10-01 06:08:36.360888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:10.920 [2024-10-01 06:08:36.360942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:16:10.920 [2024-10-01 06:08:36.360968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.361110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.920 [2024-10-01 06:08:36.361176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:10.920 [2024-10-01 06:08:36.361233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:10.920 [2024-10-01 06:08:36.361295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.366883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 06:08:36.366997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:10.920 [2024-10-01 06:08:36.367048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.920 [2024-10-01 06:08:36.367128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.367199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 
06:08:36.367264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:10.920 [2024-10-01 06:08:36.367288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.920 [2024-10-01 06:08:36.367309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.367468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 06:08:36.367537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:10.920 [2024-10-01 06:08:36.367585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.920 [2024-10-01 06:08:36.367609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.367667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 06:08:36.367694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:10.920 [2024-10-01 06:08:36.367714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.920 [2024-10-01 06:08:36.367769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.378958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 06:08:36.379108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:10.920 [2024-10-01 06:08:36.379162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.920 [2024-10-01 06:08:36.379216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.388734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 06:08:36.388987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:10.920 [2024-10-01 06:08:36.389045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.920 [2024-10-01 06:08:36.389075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.389190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 06:08:36.389383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:10.920 [2024-10-01 06:08:36.389430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.920 [2024-10-01 06:08:36.389455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.389519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 06:08:36.389545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:10.920 [2024-10-01 06:08:36.389600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.920 [2024-10-01 06:08:36.389631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.920 [2024-10-01 06:08:36.389719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.920 [2024-10-01 06:08:36.389746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:10.921 [2024-10-01 06:08:36.389857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.921 [2024-10-01 06:08:36.389883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.921 [2024-10-01 06:08:36.389932] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.921 [2024-10-01 06:08:36.390023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:10.921 [2024-10-01 06:08:36.390048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.921 [2024-10-01 06:08:36.390069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.921 [2024-10-01 06:08:36.390160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.921 [2024-10-01 06:08:36.390189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:10.921 [2024-10-01 06:08:36.390222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.921 [2024-10-01 06:08:36.390281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.921 [2024-10-01 06:08:36.390341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.921 [2024-10-01 06:08:36.390401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:10.921 [2024-10-01 06:08:36.390424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.921 [2024-10-01 06:08:36.390552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.921 [2024-10-01 06:08:36.390690] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 199.357 ms, result 0 00:16:10.921 true 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84655 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84655 ']' 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84655 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84655 00:16:10.921 killing process with pid 84655 00:16:10.921 Received shutdown signal, test time was about 4.000000 seconds 00:16:10.921 00:16:10.921 Latency(us) 00:16:10.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.921 =================================================================================================================== 00:16:10.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84655' 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84655 00:16:10.921 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84655 00:16:11.179 Remove shared memory files 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:11.179 06:08:36 
ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:11.179 ************************************ 00:16:11.179 END TEST ftl_bdevperf 00:16:11.179 ************************************ 00:16:11.179 00:16:11.179 real 0m19.334s 00:16:11.179 user 0m21.904s 00:16:11.179 sys 0m0.829s 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:11.179 06:08:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:11.179 06:08:36 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:11.179 06:08:36 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:11.179 06:08:36 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:11.179 06:08:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:11.179 ************************************ 00:16:11.179 START TEST ftl_trim 00:16:11.179 ************************************ 00:16:11.179 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:11.438 * Looking for test storage... 00:16:11.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.438 06:08:36 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.438 --rc genhtml_branch_coverage=1 00:16:11.438 --rc genhtml_function_coverage=1 00:16:11.438 --rc genhtml_legend=1 00:16:11.438 --rc geninfo_all_blocks=1 00:16:11.438 --rc geninfo_unexecuted_blocks=1 00:16:11.438 00:16:11.438 ' 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.438 --rc genhtml_branch_coverage=1 00:16:11.438 --rc genhtml_function_coverage=1 00:16:11.438 --rc genhtml_legend=1 00:16:11.438 --rc geninfo_all_blocks=1 00:16:11.438 --rc geninfo_unexecuted_blocks=1 00:16:11.438 00:16:11.438 ' 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.438 --rc genhtml_branch_coverage=1 00:16:11.438 --rc genhtml_function_coverage=1 00:16:11.438 --rc genhtml_legend=1 00:16:11.438 --rc geninfo_all_blocks=1 00:16:11.438 --rc geninfo_unexecuted_blocks=1 00:16:11.438 00:16:11.438 ' 00:16:11.438 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.438 --rc genhtml_branch_coverage=1 00:16:11.438 --rc genhtml_function_coverage=1 00:16:11.438 --rc genhtml_legend=1 00:16:11.438 --rc geninfo_all_blocks=1 00:16:11.438 --rc geninfo_unexecuted_blocks=1 00:16:11.438 00:16:11.438 ' 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
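The "Waiting for process to start up..." message above is the waitforlisten step: the test blocks until the freshly launched target answers on its RPC socket. A rough Python equivalent of that polling loop (rpc_get_methods is a standard SPDK RPC; the paths are taken from the trace, and this is an illustration rather than the script's actual implementation):

```python
import subprocess
import time

RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as traced in this log

def waitforlisten(sock: str = "/var/tmp/spdk.sock", timeout_s: float = 30.0) -> None:
    # Poll the RPC socket until the target responds; rpc_get_methods is a
    # standard SPDK method and succeeds as soon as the listener is up.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        r = subprocess.run([RPC_PY, "-s", sock, "rpc_get_methods"],
                           capture_output=True)
        if r.returncode == 0:
            return
        time.sleep(0.5)
    raise TimeoutError(f"no RPC listener on {sock}")

waitforlisten()
```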
00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:11.438 06:08:36 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:11.439 06:08:36 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:11.439 06:08:36 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:11.439 06:08:36 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:11.439 06:08:36 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=84974 00:16:11.439 06:08:36 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 84974 00:16:11.439 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 84974 ']' 00:16:11.439 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.439 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.439 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.439 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.439 06:08:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:11.439 06:08:36 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:11.439 [2024-10-01 06:08:37.022342] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
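One detail worth decoding from the launch line above: `-m 0x7` is a core bitmask, and its three set bits account for the "Total cores available: 3" notice and the three reactors reported on cores 0-2 right after initialization. A one-liner to decode it (illustrative only):

```python
mask = 0x7  # spdk_tgt -m 0x7, as traced above
print([core for core in range(64) if mask >> core & 1])  # [0, 1, 2] -> three reactors
```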
00:16:11.439 [2024-10-01 06:08:37.022465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84974 ] 00:16:11.696 [2024-10-01 06:08:37.153737] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.696 [2024-10-01 06:08:37.195986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.696 [2024-10-01 06:08:37.196256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.696 [2024-10-01 06:08:37.196279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.262 06:08:37 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.262 06:08:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:12.262 06:08:37 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:12.262 06:08:37 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:12.262 06:08:37 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:12.262 06:08:37 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:12.262 06:08:37 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:12.262 06:08:37 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:12.520 06:08:38 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:12.520 06:08:38 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:12.520 06:08:38 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:12.520 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:12.520 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:12.520 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:12.520 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:12.779 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:12.779 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:12.779 { 00:16:12.779 "name": "nvme0n1", 00:16:12.779 "aliases": [ 00:16:12.779 "0f8d19b5-adec-4d57-8699-c142de345815" 00:16:12.779 ], 00:16:12.779 "product_name": "NVMe disk", 00:16:12.779 "block_size": 4096, 00:16:12.779 "num_blocks": 1310720, 00:16:12.779 "uuid": "0f8d19b5-adec-4d57-8699-c142de345815", 00:16:12.779 "numa_id": -1, 00:16:12.779 "assigned_rate_limits": { 00:16:12.779 "rw_ios_per_sec": 0, 00:16:12.779 "rw_mbytes_per_sec": 0, 00:16:12.779 "r_mbytes_per_sec": 0, 00:16:12.779 "w_mbytes_per_sec": 0 00:16:12.779 }, 00:16:12.779 "claimed": true, 00:16:12.779 "claim_type": "read_many_write_one", 00:16:12.779 "zoned": false, 00:16:12.779 "supported_io_types": { 00:16:12.779 "read": true, 00:16:12.779 "write": true, 00:16:12.779 "unmap": true, 00:16:12.779 "flush": true, 00:16:12.779 "reset": true, 00:16:12.779 "nvme_admin": true, 00:16:12.779 "nvme_io": true, 00:16:12.779 "nvme_io_md": false, 00:16:12.779 "write_zeroes": true, 00:16:12.779 "zcopy": false, 00:16:12.779 "get_zone_info": false, 00:16:12.779 "zone_management": false, 00:16:12.779 "zone_append": false, 00:16:12.779 "compare": true, 00:16:12.779 "compare_and_write": false, 00:16:12.779 "abort": true, 00:16:12.779 "seek_hole": false, 00:16:12.779 
"seek_data": false, 00:16:12.779 "copy": true, 00:16:12.779 "nvme_iov_md": false 00:16:12.779 }, 00:16:12.779 "driver_specific": { 00:16:12.779 "nvme": [ 00:16:12.779 { 00:16:12.779 "pci_address": "0000:00:11.0", 00:16:12.779 "trid": { 00:16:12.779 "trtype": "PCIe", 00:16:12.779 "traddr": "0000:00:11.0" 00:16:12.779 }, 00:16:12.779 "ctrlr_data": { 00:16:12.779 "cntlid": 0, 00:16:12.779 "vendor_id": "0x1b36", 00:16:12.779 "model_number": "QEMU NVMe Ctrl", 00:16:12.779 "serial_number": "12341", 00:16:12.779 "firmware_revision": "8.0.0", 00:16:12.779 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:12.779 "oacs": { 00:16:12.779 "security": 0, 00:16:12.779 "format": 1, 00:16:12.779 "firmware": 0, 00:16:12.779 "ns_manage": 1 00:16:12.779 }, 00:16:12.779 "multi_ctrlr": false, 00:16:12.779 "ana_reporting": false 00:16:12.779 }, 00:16:12.779 "vs": { 00:16:12.779 "nvme_version": "1.4" 00:16:12.779 }, 00:16:12.779 "ns_data": { 00:16:12.779 "id": 1, 00:16:12.779 "can_share": false 00:16:12.779 } 00:16:12.779 } 00:16:12.779 ], 00:16:12.779 "mp_policy": "active_passive" 00:16:12.779 } 00:16:12.779 } 00:16:12.779 ]' 00:16:12.779 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:12.779 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:12.779 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:13.036 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:13.036 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:13.036 06:08:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:13.036 06:08:38 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:13.036 06:08:38 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:13.036 06:08:38 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:13.036 06:08:38 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:13.036 06:08:38 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:13.036 06:08:38 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=850aaecd-4a4c-4f12-a388-cd0a8e3e38da 00:16:13.036 06:08:38 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:13.036 06:08:38 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 850aaecd-4a4c-4f12-a388-cd0a8e3e38da 00:16:13.293 06:08:38 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:13.551 06:08:39 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b2e79009-7e63-4ed7-8219-a8281bd8cdf7 00:16:13.551 06:08:39 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b2e79009-7e63-4ed7-8219-a8281bd8cdf7 00:16:13.810 06:08:39 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:13.810 06:08:39 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:13.810 06:08:39 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:13.810 06:08:39 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:13.810 06:08:39 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:13.810 06:08:39 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:13.810 06:08:39 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 
0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:13.810 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:13.810 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:13.810 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:13.810 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:13.810 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:14.069 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:14.069 { 00:16:14.069 "name": "0581dc61-7e72-42ef-8f00-47cdb1f936bc", 00:16:14.069 "aliases": [ 00:16:14.069 "lvs/nvme0n1p0" 00:16:14.069 ], 00:16:14.069 "product_name": "Logical Volume", 00:16:14.069 "block_size": 4096, 00:16:14.069 "num_blocks": 26476544, 00:16:14.069 "uuid": "0581dc61-7e72-42ef-8f00-47cdb1f936bc", 00:16:14.069 "assigned_rate_limits": { 00:16:14.069 "rw_ios_per_sec": 0, 00:16:14.069 "rw_mbytes_per_sec": 0, 00:16:14.069 "r_mbytes_per_sec": 0, 00:16:14.069 "w_mbytes_per_sec": 0 00:16:14.069 }, 00:16:14.069 "claimed": false, 00:16:14.069 "zoned": false, 00:16:14.069 "supported_io_types": { 00:16:14.069 "read": true, 00:16:14.069 "write": true, 00:16:14.069 "unmap": true, 00:16:14.069 "flush": false, 00:16:14.069 "reset": true, 00:16:14.069 "nvme_admin": false, 00:16:14.069 "nvme_io": false, 00:16:14.069 "nvme_io_md": false, 00:16:14.069 "write_zeroes": true, 00:16:14.069 "zcopy": false, 00:16:14.069 "get_zone_info": false, 00:16:14.069 "zone_management": false, 00:16:14.069 "zone_append": false, 00:16:14.069 "compare": false, 00:16:14.069 "compare_and_write": false, 00:16:14.069 "abort": false, 00:16:14.069 "seek_hole": true, 00:16:14.069 "seek_data": true, 00:16:14.069 "copy": false, 00:16:14.069 "nvme_iov_md": false 00:16:14.069 }, 00:16:14.069 "driver_specific": { 00:16:14.069 "lvol": { 00:16:14.069 "lvol_store_uuid": "b2e79009-7e63-4ed7-8219-a8281bd8cdf7", 00:16:14.069 "base_bdev": "nvme0n1", 00:16:14.069 "thin_provision": true, 00:16:14.069 "num_allocated_clusters": 0, 00:16:14.069 "snapshot": false, 00:16:14.069 "clone": false, 00:16:14.069 "esnap_clone": false 00:16:14.069 } 00:16:14.069 } 00:16:14.069 } 00:16:14.069 ]' 00:16:14.069 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:14.069 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:14.069 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:14.069 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:14.069 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:14.069 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:14.069 06:08:39 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:14.069 06:08:39 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:14.069 06:08:39 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:14.328 06:08:39 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:14.328 06:08:39 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:14.328 06:08:39 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:14.328 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local 
bdev_name=0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:14.328 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:14.328 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:14.328 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:14.328 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:14.587 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:14.587 { 00:16:14.587 "name": "0581dc61-7e72-42ef-8f00-47cdb1f936bc", 00:16:14.587 "aliases": [ 00:16:14.587 "lvs/nvme0n1p0" 00:16:14.587 ], 00:16:14.587 "product_name": "Logical Volume", 00:16:14.587 "block_size": 4096, 00:16:14.587 "num_blocks": 26476544, 00:16:14.587 "uuid": "0581dc61-7e72-42ef-8f00-47cdb1f936bc", 00:16:14.587 "assigned_rate_limits": { 00:16:14.587 "rw_ios_per_sec": 0, 00:16:14.587 "rw_mbytes_per_sec": 0, 00:16:14.587 "r_mbytes_per_sec": 0, 00:16:14.587 "w_mbytes_per_sec": 0 00:16:14.587 }, 00:16:14.587 "claimed": false, 00:16:14.587 "zoned": false, 00:16:14.587 "supported_io_types": { 00:16:14.587 "read": true, 00:16:14.587 "write": true, 00:16:14.587 "unmap": true, 00:16:14.587 "flush": false, 00:16:14.587 "reset": true, 00:16:14.587 "nvme_admin": false, 00:16:14.587 "nvme_io": false, 00:16:14.587 "nvme_io_md": false, 00:16:14.587 "write_zeroes": true, 00:16:14.587 "zcopy": false, 00:16:14.587 "get_zone_info": false, 00:16:14.587 "zone_management": false, 00:16:14.587 "zone_append": false, 00:16:14.587 "compare": false, 00:16:14.587 "compare_and_write": false, 00:16:14.587 "abort": false, 00:16:14.587 "seek_hole": true, 00:16:14.587 "seek_data": true, 00:16:14.587 "copy": false, 00:16:14.587 "nvme_iov_md": false 00:16:14.587 }, 00:16:14.587 "driver_specific": { 00:16:14.587 "lvol": { 00:16:14.587 "lvol_store_uuid": "b2e79009-7e63-4ed7-8219-a8281bd8cdf7", 00:16:14.587 "base_bdev": "nvme0n1", 00:16:14.587 "thin_provision": true, 00:16:14.587 "num_allocated_clusters": 0, 00:16:14.587 "snapshot": false, 00:16:14.587 "clone": false, 00:16:14.587 "esnap_clone": false 00:16:14.587 } 00:16:14.587 } 00:16:14.587 } 00:16:14.587 ]' 00:16:14.587 06:08:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:14.587 06:08:40 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:14.587 06:08:40 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:14.587 06:08:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:14.587 06:08:40 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:14.587 06:08:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 
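For reference, the device provisioning traced above reduces to a short sequence of rpc.py calls against the already-running spdk_tgt. A minimal bash sketch of that sequence follows; the PCIe addresses and sizes are the ones from this particular run, the lvstore UUID is whatever bdev_lvol_create_lvstore prints (not a fixed value), and RPC is assumed to point at the same scripts/rpc.py used throughout this log.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: attach the QEMU NVMe controller, clear any stale lvstores,
    # then carve out a 103424 MiB thin-provisioned lvol (-t) to hold FTL data.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # exposes nvme0n1
    for u in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $RPC bdev_lvol_delete_lvstore -u "$u"                            # the clear_lvols step above
    done
    lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)                     # prints the new lvstore UUID
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"

    # NV cache: attach the second controller and split off a 5171 MiB partition.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # exposes nvc0n1
    $RPC bdev_split_create nvc0n1 -s 5171 1                              # exposes nvc0n1p0
    # The bdev_ftl_create call that follows in the log then stacks ftl0 on these two.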
00:16:14.587 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:14.846 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0581dc61-7e72-42ef-8f00-47cdb1f936bc 00:16:14.846 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:14.846 { 00:16:14.846 "name": "0581dc61-7e72-42ef-8f00-47cdb1f936bc", 00:16:14.846 "aliases": [ 00:16:14.846 "lvs/nvme0n1p0" 00:16:14.846 ], 00:16:14.846 "product_name": "Logical Volume", 00:16:14.846 "block_size": 4096, 00:16:14.846 "num_blocks": 26476544, 00:16:14.846 "uuid": "0581dc61-7e72-42ef-8f00-47cdb1f936bc", 00:16:14.846 "assigned_rate_limits": { 00:16:14.846 "rw_ios_per_sec": 0, 00:16:14.846 "rw_mbytes_per_sec": 0, 00:16:14.846 "r_mbytes_per_sec": 0, 00:16:14.846 "w_mbytes_per_sec": 0 00:16:14.846 }, 00:16:14.846 "claimed": false, 00:16:14.846 "zoned": false, 00:16:14.846 "supported_io_types": { 00:16:14.846 "read": true, 00:16:14.846 "write": true, 00:16:14.846 "unmap": true, 00:16:14.846 "flush": false, 00:16:14.846 "reset": true, 00:16:14.846 "nvme_admin": false, 00:16:14.846 "nvme_io": false, 00:16:14.846 "nvme_io_md": false, 00:16:14.846 "write_zeroes": true, 00:16:14.846 "zcopy": false, 00:16:14.846 "get_zone_info": false, 00:16:14.846 "zone_management": false, 00:16:14.846 "zone_append": false, 00:16:14.846 "compare": false, 00:16:14.846 "compare_and_write": false, 00:16:14.846 "abort": false, 00:16:14.846 "seek_hole": true, 00:16:14.846 "seek_data": true, 00:16:14.846 "copy": false, 00:16:14.846 "nvme_iov_md": false 00:16:14.846 }, 00:16:14.846 "driver_specific": { 00:16:14.846 "lvol": { 00:16:14.846 "lvol_store_uuid": "b2e79009-7e63-4ed7-8219-a8281bd8cdf7", 00:16:14.846 "base_bdev": "nvme0n1", 00:16:14.846 "thin_provision": true, 00:16:14.846 "num_allocated_clusters": 0, 00:16:14.846 "snapshot": false, 00:16:14.846 "clone": false, 00:16:14.846 "esnap_clone": false 00:16:14.846 } 00:16:14.846 } 00:16:14.846 } 00:16:14.846 ]' 00:16:14.846 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:14.846 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:14.846 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:15.105 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:15.105 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:15.105 06:08:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:15.105 06:08:40 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:15.105 06:08:40 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0581dc61-7e72-42ef-8f00-47cdb1f936bc -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:15.105 [2024-10-01 06:08:40.652023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.105 [2024-10-01 06:08:40.652086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:15.105 [2024-10-01 06:08:40.652099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:15.105 [2024-10-01 06:08:40.652108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.105 [2024-10-01 06:08:40.655310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.105 [2024-10-01 06:08:40.655419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
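The get_bdev_size helper traced three times above is simply block_size times num_blocks from bdev_get_bdevs, reported in MiB. A sketch of the same computation, assuming the RPC variable from the previous snippet; the helper name and jq filters mirror what autotest_common.sh runs in the trace:

    get_bdev_size() {                       # size of a bdev in MiB
        local bdev=$1 bs nb
        bs=$($RPC bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')
        nb=$($RPC bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')
        echo $(( bs * nb / 1024 / 1024 ))
    }
    # nvme0n1: 4096 B * 1310720 blocks  = 5120 MiB   (the whole base namespace)
    # lvol:    4096 B * 26476544 blocks = 103424 MiB (thin-provisioned, so it can
    #          legitimately exceed the 5120 MiB physically behind it)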
00:16:15.105 [2024-10-01 06:08:40.655484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:16:15.105 [2024-10-01 06:08:40.655507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.105 [2024-10-01 06:08:40.655630] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:15.105 [2024-10-01 06:08:40.655917] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:15.105 [2024-10-01 06:08:40.655955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.105 [2024-10-01 06:08:40.655974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:15.106 [2024-10-01 06:08:40.655991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:16:15.106 [2024-10-01 06:08:40.656037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.106 [2024-10-01 06:08:40.656171] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f441fd4a-13ea-4411-bf3b-066c792337e7 00:16:15.106 [2024-10-01 06:08:40.657481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.106 [2024-10-01 06:08:40.657564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:15.106 [2024-10-01 06:08:40.657608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:15.106 [2024-10-01 06:08:40.657626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.106 [2024-10-01 06:08:40.664463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.106 [2024-10-01 06:08:40.664550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:15.106 [2024-10-01 06:08:40.664645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.749 ms 00:16:15.106 [2024-10-01 06:08:40.664664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.106 [2024-10-01 06:08:40.664778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.106 [2024-10-01 06:08:40.664799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:15.106 [2024-10-01 06:08:40.664817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:15.106 [2024-10-01 06:08:40.664910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.106 [2024-10-01 06:08:40.664966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.106 [2024-10-01 06:08:40.664985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:15.106 [2024-10-01 06:08:40.665082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:15.106 [2024-10-01 06:08:40.665100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.106 [2024-10-01 06:08:40.665138] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:15.106 [2024-10-01 06:08:40.666762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.106 [2024-10-01 06:08:40.666855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:15.106 [2024-10-01 06:08:40.666900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:16:15.106 [2024-10-01 06:08:40.666921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:15.106 [2024-10-01 06:08:40.666989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.106 [2024-10-01 06:08:40.667047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:15.106 [2024-10-01 06:08:40.667065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:15.106 [2024-10-01 06:08:40.667084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.106 [2024-10-01 06:08:40.667126] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:15.106 [2024-10-01 06:08:40.667276] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:15.106 [2024-10-01 06:08:40.667338] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:15.106 [2024-10-01 06:08:40.667368] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:15.106 [2024-10-01 06:08:40.667415] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:15.106 [2024-10-01 06:08:40.667444] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:15.106 [2024-10-01 06:08:40.667467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:15.106 [2024-10-01 06:08:40.667508] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:15.106 [2024-10-01 06:08:40.667526] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:15.106 [2024-10-01 06:08:40.667542] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:15.106 [2024-10-01 06:08:40.667558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.106 [2024-10-01 06:08:40.667585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:15.106 [2024-10-01 06:08:40.667647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:16:15.106 [2024-10-01 06:08:40.667669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.106 [2024-10-01 06:08:40.667760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.106 [2024-10-01 06:08:40.667802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:15.106 [2024-10-01 06:08:40.667817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:15.106 [2024-10-01 06:08:40.667869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.106 [2024-10-01 06:08:40.667997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:15.106 [2024-10-01 06:08:40.668042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:15.106 [2024-10-01 06:08:40.668059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:15.106 [2024-10-01 06:08:40.668085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668104] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:16:15.106 [2024-10-01 06:08:40.668110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:15.106 [2024-10-01 06:08:40.668122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:15.106 [2024-10-01 06:08:40.668129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:15.106 [2024-10-01 06:08:40.668134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:15.106 [2024-10-01 06:08:40.668142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:15.106 [2024-10-01 06:08:40.668148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:15.106 [2024-10-01 06:08:40.668154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:15.106 [2024-10-01 06:08:40.668166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:15.106 [2024-10-01 06:08:40.668184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:15.106 [2024-10-01 06:08:40.668204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:15.106 [2024-10-01 06:08:40.668220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:15.106 [2024-10-01 06:08:40.668253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:15.106 [2024-10-01 06:08:40.668269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:15.106 [2024-10-01 06:08:40.668280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:15.106 [2024-10-01 06:08:40.668287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:15.106 [2024-10-01 06:08:40.668292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:15.106 [2024-10-01 06:08:40.668299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:15.106 [2024-10-01 06:08:40.668305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:15.106 [2024-10-01 
06:08:40.668313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:15.106 [2024-10-01 06:08:40.668326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:15.106 [2024-10-01 06:08:40.668331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668338] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:15.106 [2024-10-01 06:08:40.668344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:15.106 [2024-10-01 06:08:40.668353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:15.106 [2024-10-01 06:08:40.668367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:15.106 [2024-10-01 06:08:40.668372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:15.106 [2024-10-01 06:08:40.668379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:15.106 [2024-10-01 06:08:40.668384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:15.106 [2024-10-01 06:08:40.668390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:15.106 [2024-10-01 06:08:40.668395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:15.106 [2024-10-01 06:08:40.668406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:15.106 [2024-10-01 06:08:40.668414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:15.106 [2024-10-01 06:08:40.668422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:15.107 [2024-10-01 06:08:40.668428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:15.107 [2024-10-01 06:08:40.668435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:15.107 [2024-10-01 06:08:40.668441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:15.107 [2024-10-01 06:08:40.668447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:15.107 [2024-10-01 06:08:40.668453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:15.107 [2024-10-01 06:08:40.668462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:15.107 [2024-10-01 06:08:40.668467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:15.107 [2024-10-01 06:08:40.668474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:15.107 [2024-10-01 06:08:40.668480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:15.107 [2024-10-01 06:08:40.668486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:15.107 [2024-10-01 06:08:40.668492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:15.107 [2024-10-01 06:08:40.668499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:15.107 [2024-10-01 06:08:40.668505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:15.107 [2024-10-01 06:08:40.668511] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:15.107 [2024-10-01 06:08:40.668518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:15.107 [2024-10-01 06:08:40.668527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:15.107 [2024-10-01 06:08:40.668533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:15.107 [2024-10-01 06:08:40.668540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:15.107 [2024-10-01 06:08:40.668546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:15.107 [2024-10-01 06:08:40.668554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.107 [2024-10-01 06:08:40.668560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:15.107 [2024-10-01 06:08:40.668571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:16:15.107 [2024-10-01 06:08:40.668578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.107 [2024-10-01 06:08:40.668646] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
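The L2P figures in the layout dump above are internally consistent. A quick cross-check in bash arithmetic, using only values printed in the log (data_btm region size, the --overprovisioning 10 and --l2p_dram_limit 60 flags from the bdev_ftl_create call, and 4 bytes per L2P entry):

    # data_btm is 102400 MiB; holding back 10% as overprovisioning leaves
    # 92160 MiB of user-addressable space in 4 KiB blocks, which matches the
    # reported entry count exactly.
    entries=$(( 102400 * 90 / 100 * 1048576 / 4096 ))
    echo "$entries"                     # 23592960 -> "L2P entries: 23592960"
    echo $(( entries * 4 / 1048576 ))   # 90       -> "Region l2p ... blocks: 90.00 MiB"
    # --l2p_dram_limit 60 then caps the resident slice of that 90 MiB table;
    # the l2p_cache line later in the log reports "59 (of 60) MiB" accordingly.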
00:16:15.107 [2024-10-01 06:08:40.668654] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:17.637 [2024-10-01 06:08:43.070129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.070345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:17.637 [2024-10-01 06:08:43.070417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2401.462 ms 00:16:17.637 [2024-10-01 06:08:43.070443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.091896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.092118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:17.637 [2024-10-01 06:08:43.092196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.330 ms 00:16:17.637 [2024-10-01 06:08:43.092223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.092438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.092613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:17.637 [2024-10-01 06:08:43.092629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:17.637 [2024-10-01 06:08:43.092639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.103752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.103794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:17.637 [2024-10-01 06:08:43.103808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.067 ms 00:16:17.637 [2024-10-01 06:08:43.103817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.103957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.103971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:17.637 [2024-10-01 06:08:43.103997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:17.637 [2024-10-01 06:08:43.104017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.104447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.104474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:17.637 [2024-10-01 06:08:43.104488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:16:17.637 [2024-10-01 06:08:43.104498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.104672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.104727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:17.637 [2024-10-01 06:08:43.104753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:17.637 [2024-10-01 06:08:43.104763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.112055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.112100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:17.637 [2024-10-01 
06:08:43.112112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.249 ms 00:16:17.637 [2024-10-01 06:08:43.112120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.121271] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:17.637 [2024-10-01 06:08:43.138508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.138550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:17.637 [2024-10-01 06:08:43.138563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.288 ms 00:16:17.637 [2024-10-01 06:08:43.138572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.200748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.200822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:17.637 [2024-10-01 06:08:43.200836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.087 ms 00:16:17.637 [2024-10-01 06:08:43.200875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.201076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.201100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:17.637 [2024-10-01 06:08:43.201112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:16:17.637 [2024-10-01 06:08:43.201123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.204812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.204865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:17.637 [2024-10-01 06:08:43.204877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.662 ms 00:16:17.637 [2024-10-01 06:08:43.204887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.207491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.207525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:17.637 [2024-10-01 06:08:43.207535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:16:17.637 [2024-10-01 06:08:43.207544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.637 [2024-10-01 06:08:43.207908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.637 [2024-10-01 06:08:43.207928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:17.638 [2024-10-01 06:08:43.207941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:16:17.638 [2024-10-01 06:08:43.207953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.638 [2024-10-01 06:08:43.238084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.638 [2024-10-01 06:08:43.238270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:17.638 [2024-10-01 06:08:43.238290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.094 ms 00:16:17.638 [2024-10-01 06:08:43.238301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.638 [2024-10-01 06:08:43.242660] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.638 [2024-10-01 06:08:43.242700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:17.638 [2024-10-01 06:08:43.242714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:16:17.638 [2024-10-01 06:08:43.242727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.638 [2024-10-01 06:08:43.246228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.638 [2024-10-01 06:08:43.246261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:17.638 [2024-10-01 06:08:43.246271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.451 ms 00:16:17.638 [2024-10-01 06:08:43.246280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.638 [2024-10-01 06:08:43.249878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.638 [2024-10-01 06:08:43.249911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:17.638 [2024-10-01 06:08:43.249920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:16:17.638 [2024-10-01 06:08:43.249933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.638 [2024-10-01 06:08:43.249997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.638 [2024-10-01 06:08:43.250020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:17.638 [2024-10-01 06:08:43.250031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:17.638 [2024-10-01 06:08:43.250044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.638 [2024-10-01 06:08:43.250129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.638 [2024-10-01 06:08:43.250141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:17.638 [2024-10-01 06:08:43.250150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:17.638 [2024-10-01 06:08:43.250160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.638 [2024-10-01 06:08:43.251217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:17.638 [2024-10-01 06:08:43.252232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2598.753 ms, result 0 00:16:17.897 [2024-10-01 06:08:43.252931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:17.897 { 00:16:17.897 "name": "ftl0", 00:16:17.897 "uuid": "f441fd4a-13ea-4411-bf3b-066c792337e7" 00:16:17.897 } 00:16:17.897 06:08:43 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:17.897 06:08:43 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:17.897 06:08:43 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:17.897 06:08:43 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:17.897 06:08:43 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:17.897 06:08:43 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:17.897 06:08:43 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:17.897 06:08:43 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:18.156 [ 00:16:18.156 { 00:16:18.156 "name": "ftl0", 00:16:18.156 "aliases": [ 00:16:18.156 "f441fd4a-13ea-4411-bf3b-066c792337e7" 00:16:18.156 ], 00:16:18.156 "product_name": "FTL disk", 00:16:18.156 "block_size": 4096, 00:16:18.156 "num_blocks": 23592960, 00:16:18.156 "uuid": "f441fd4a-13ea-4411-bf3b-066c792337e7", 00:16:18.156 "assigned_rate_limits": { 00:16:18.156 "rw_ios_per_sec": 0, 00:16:18.156 "rw_mbytes_per_sec": 0, 00:16:18.156 "r_mbytes_per_sec": 0, 00:16:18.156 "w_mbytes_per_sec": 0 00:16:18.156 }, 00:16:18.156 "claimed": false, 00:16:18.156 "zoned": false, 00:16:18.156 "supported_io_types": { 00:16:18.156 "read": true, 00:16:18.156 "write": true, 00:16:18.156 "unmap": true, 00:16:18.156 "flush": true, 00:16:18.156 "reset": false, 00:16:18.156 "nvme_admin": false, 00:16:18.156 "nvme_io": false, 00:16:18.156 "nvme_io_md": false, 00:16:18.156 "write_zeroes": true, 00:16:18.156 "zcopy": false, 00:16:18.156 "get_zone_info": false, 00:16:18.156 "zone_management": false, 00:16:18.156 "zone_append": false, 00:16:18.156 "compare": false, 00:16:18.156 "compare_and_write": false, 00:16:18.156 "abort": false, 00:16:18.156 "seek_hole": false, 00:16:18.156 "seek_data": false, 00:16:18.156 "copy": false, 00:16:18.156 "nvme_iov_md": false 00:16:18.156 }, 00:16:18.156 "driver_specific": { 00:16:18.156 "ftl": { 00:16:18.156 "base_bdev": "0581dc61-7e72-42ef-8f00-47cdb1f936bc", 00:16:18.156 "cache": "nvc0n1p0" 00:16:18.156 } 00:16:18.156 } 00:16:18.156 } 00:16:18.156 ] 00:16:18.156 06:08:43 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:18.156 06:08:43 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:18.156 06:08:43 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:18.415 06:08:43 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:18.415 06:08:43 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:18.674 06:08:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:18.674 { 00:16:18.674 "name": "ftl0", 00:16:18.674 "aliases": [ 00:16:18.674 "f441fd4a-13ea-4411-bf3b-066c792337e7" 00:16:18.674 ], 00:16:18.674 "product_name": "FTL disk", 00:16:18.674 "block_size": 4096, 00:16:18.674 "num_blocks": 23592960, 00:16:18.674 "uuid": "f441fd4a-13ea-4411-bf3b-066c792337e7", 00:16:18.674 "assigned_rate_limits": { 00:16:18.674 "rw_ios_per_sec": 0, 00:16:18.674 "rw_mbytes_per_sec": 0, 00:16:18.674 "r_mbytes_per_sec": 0, 00:16:18.674 "w_mbytes_per_sec": 0 00:16:18.674 }, 00:16:18.674 "claimed": false, 00:16:18.674 "zoned": false, 00:16:18.674 "supported_io_types": { 00:16:18.674 "read": true, 00:16:18.674 "write": true, 00:16:18.674 "unmap": true, 00:16:18.674 "flush": true, 00:16:18.674 "reset": false, 00:16:18.674 "nvme_admin": false, 00:16:18.674 "nvme_io": false, 00:16:18.674 "nvme_io_md": false, 00:16:18.674 "write_zeroes": true, 00:16:18.674 "zcopy": false, 00:16:18.674 "get_zone_info": false, 00:16:18.674 "zone_management": false, 00:16:18.674 "zone_append": false, 00:16:18.674 "compare": false, 00:16:18.674 "compare_and_write": false, 00:16:18.674 "abort": false, 00:16:18.674 "seek_hole": false, 00:16:18.674 "seek_data": false, 00:16:18.674 "copy": false, 00:16:18.674 "nvme_iov_md": false 00:16:18.674 }, 00:16:18.674 "driver_specific": { 00:16:18.674 "ftl": { 00:16:18.674 "base_bdev": "0581dc61-7e72-42ef-8f00-47cdb1f936bc", 00:16:18.674 "cache": "nvc0n1p0" 
00:16:18.674 } 00:16:18.674 } 00:16:18.674 } 00:16:18.674 ]' 00:16:18.674 06:08:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:18.674 06:08:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:18.674 06:08:44 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:18.674 [2024-10-01 06:08:44.281083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.674 [2024-10-01 06:08:44.281140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:18.674 [2024-10-01 06:08:44.281156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:18.674 [2024-10-01 06:08:44.281178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.674 [2024-10-01 06:08:44.281229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:18.674 [2024-10-01 06:08:44.281789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.674 [2024-10-01 06:08:44.281811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:18.674 [2024-10-01 06:08:44.281821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:16:18.674 [2024-10-01 06:08:44.281834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.674 [2024-10-01 06:08:44.282433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.674 [2024-10-01 06:08:44.282453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:18.674 [2024-10-01 06:08:44.282462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:16:18.674 [2024-10-01 06:08:44.282474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.674 [2024-10-01 06:08:44.286152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.674 [2024-10-01 06:08:44.286177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:18.674 [2024-10-01 06:08:44.286188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.633 ms 00:16:18.674 [2024-10-01 06:08:44.286199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.293190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.961 [2024-10-01 06:08:44.293229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:18.961 [2024-10-01 06:08:44.293239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:16:18.961 [2024-10-01 06:08:44.293252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.295123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.961 [2024-10-01 06:08:44.295162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:18.961 [2024-10-01 06:08:44.295172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:16:18.961 [2024-10-01 06:08:44.295182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.299732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.961 [2024-10-01 06:08:44.299771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:18.961 [2024-10-01 06:08:44.299782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.503 ms 00:16:18.961 
[2024-10-01 06:08:44.299792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.300005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.961 [2024-10-01 06:08:44.300038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:18.961 [2024-10-01 06:08:44.300047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:16:18.961 [2024-10-01 06:08:44.300060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.302206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.961 [2024-10-01 06:08:44.302240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:18.961 [2024-10-01 06:08:44.302248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:16:18.961 [2024-10-01 06:08:44.302261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.303790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.961 [2024-10-01 06:08:44.303824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:18.961 [2024-10-01 06:08:44.303833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:16:18.961 [2024-10-01 06:08:44.303842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.305174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.961 [2024-10-01 06:08:44.305214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:18.961 [2024-10-01 06:08:44.305223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:16:18.961 [2024-10-01 06:08:44.305233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.306295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.961 [2024-10-01 06:08:44.306332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:18.961 [2024-10-01 06:08:44.306341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:16:18.961 [2024-10-01 06:08:44.306351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.961 [2024-10-01 06:08:44.306390] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:18.961 [2024-10-01 06:08:44.306407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:18.961 [2024-10-01 06:08:44.306632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:18.962 [2024-10-01 06:08:44.306642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:18.962 [2024-10-01 06:08:44.306649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:18.962 [2024-10-01 06:08:44.306659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:18.962 [2024-10-01 06:08:44.306667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:18.962 [2024-10-01 06:08:44.306677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:18.962 [2024-10-01 06:08:44.306684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:18.962 [2024-10-01 06:08:44.306693] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 33-100: 0 / 261120 wr_cnt: 0 state: free
00:16:18.962 [2024-10-01 06:08:44.307342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:18.962 [2024-10-01 06:08:44.307350] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f441fd4a-13ea-4411-bf3b-066c792337e7
00:16:18.962 [2024-10-01 06:08:44.307361] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:18.962 [2024-10-01 06:08:44.307368] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:18.962 [2024-10-01 06:08:44.307377] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:18.962 [2024-10-01 06:08:44.307385] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:18.962 [2024-10-01 06:08:44.307404] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:18.962 [2024-10-01 06:08:44.307412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0, high: 0, low: 0, start: 0
00:16:18.962 [2024-10-01 06:08:44.307456] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 1.068 ms, status 0
00:16:18.962 [2024-10-01 06:08:44.309455] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 1.934 ms, status 0
00:16:18.963 [2024-10-01 06:08:44.309628] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.074 ms, status 0
00:16:18.963 [2024-10-01 06:08:44.316253 - 06:08:44.339739] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each duration 0.000 ms, status 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:16:18.963 [2024-10-01 06:08:44.339956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.842 ms, result 0
00:16:18.963 true
00:16:18.963 06:08:44 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 84974
00:16:18.963 06:08:44 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 84974 ']'
00:16:18.963 06:08:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 84974
00:16:18.963 06:08:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:16:18.963 06:08:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:18.963 06:08:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84974
00:16:18.963 06:08:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 84974
06:08:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
06:08:44 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84974'
06:08:44 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 84974
06:08:44 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 84974
00:16:24.232 06:08:48 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
65536+0 records in
00:16:24.232 65536+0 records out
00:16:24.232 268435456 bytes (268 MB, 256 MiB) copied, 0.960723 s, 279 MB/s
00:16:24.232 06:08:49 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:24.492 [2024-10-01 06:08:49.900991] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:16:24.492 [2024-10-01 06:08:49.901099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85139 ]
00:16:24.492 [2024-10-01 06:08:50.030091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:24.492 [2024-10-01 06:08:50.073641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:24.752 [2024-10-01 06:08:50.176295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:24.752 [2024-10-01 06:08:50.176372] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:24.752 [2024-10-01 06:08:50.331030] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.004 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.333674] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 2.337 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.333809] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:24.752 [2024-10-01 06:08:50.334057] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:24.752 [2024-10-01 06:08:50.334074] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.274 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.335531] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:24.752 [2024-10-01 06:08:50.338353] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 2.823 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.338559] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.021 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.345089] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 6.460 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.345444] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.070 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.345501] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.007 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.345552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:16:24.752 [2024-10-01 06:08:50.347247] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 1.700 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.347329] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.012 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.347379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:16:24.752 [2024-10-01 06:08:50.347398] upgrade/ftl_sb_v5.c: 278-294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
00:16:24.752 [2024-10-01 06:08:50.347561] upgrade/ftl_sb_v5.c: 92-109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
00:16:24.752 [2024-10-01 06:08:50.347593] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:16:24.752 [2024-10-01 06:08:50.347602] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:16:24.752 [2024-10-01 06:08:50.347614] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:16:24.752 [2024-10-01 06:08:50.347622] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:16:24.752 [2024-10-01 06:08:50.347630] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:16:24.752 [2024-10-01 06:08:50.347638] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:16:24.752 [2024-10-01 06:08:50.347645] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.269 ms, status 0
00:16:24.752 [2024-10-01 06:08:50.347765] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.069 ms, status 0
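The geometry just logged is internally consistent: 23592960 L2P entries at the 4-byte address size fill exactly the 90.00 MiB l2p region in the layout dump that follows, and at a 4 KiB block size those entries address 90 GiB of the 101 GiB base device, the remainder going to over-provisioning and metadata. A standalone shell check (illustrative, not part of trim.sh; the 4096-byte block size is an assumption based on the 4K test pattern):

    entries=23592960   # L2P entries, from the log above
    addr=4             # L2P address size in bytes
    blk=4096           # assumed FTL block size (4 KiB)
    echo $(( entries * addr / 1024 / 1024 ))        # 90   -> MiB, matches the l2p region below
    echo $(( entries * blk / 1024 / 1024 / 1024 ))  # 90   -> GiB of addressable user data
    echo $(( 103424 / 1024 ))                       # 101  -> GiB base device capacity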
00:16:24.752 [2024-10-01 06:08:50.347911] ftl_layout.c: 768:ftl_layout_dump / 130-133:dump_region: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, MiB):
00:16:24.752     sb: 0.00 / 0.12 | l2p: 0.12 / 90.00 | band_md: 90.12 / 0.50 | band_md_mirror: 90.62 / 0.50 | nvc_md: 123.88 / 0.12 | nvc_md_mirror: 124.00 / 0.12
00:16:24.752     p2l0: 91.12 / 8.00 | p2l1: 99.12 / 8.00 | p2l2: 107.12 / 8.00 | p2l3: 115.12 / 8.00
00:16:24.752     trim_md: 123.12 / 0.25 | trim_md_mirror: 123.38 / 0.25 | trim_log: 123.62 / 0.12 | trim_log_mirror: 123.75 / 0.12
00:16:24.753 [2024-10-01 06:08:50.348274] ftl_layout.c: 775:ftl_layout_dump / 130-133:dump_region: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, MiB): sb_mirror: 0.00 / 0.12 | data_btm: 0.25 / 102400.00 | vmap: 102400.25 / 3.38
00:16:24.753 [2024-10-01 06:08:50.348346] upgrade/ftl_sb_v5.c: 408/416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:16:24.753     Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 | type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 | type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 | type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:16:24.753     Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 | type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 | type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 | type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:16:24.753     Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 | type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 | type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 | type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:16:24.753     Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 | type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 | type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:16:24.753 [2024-10-01 06:08:50.348471] upgrade/ftl_sb_v5.c: 422/430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:16:24.753     Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 | type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 | type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 | type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 | type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:16:24.753 [2024-10-01 06:08:50.348519] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.695 ms, status 0
00:16:25.012 [2024-10-01 06:08:50.371243] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 22.642 ms, status 0
00:16:25.012 [2024-10-01 06:08:50.371829] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.093 ms, status 0
00:16:25.012 [2024-10-01 06:08:50.382441] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 10.260 ms, status 0
00:16:25.012 [2024-10-01 06:08:50.382758] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.003 ms, status 0
00:16:25.012 [2024-10-01 06:08:50.383342] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.398 ms, status 0
00:16:25.012 [2024-10-01 06:08:50.383917] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.131 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.390203] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 6.068 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.393172] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:16:25.013 [2024-10-01 06:08:50.393215] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:16:25.013 [2024-10-01 06:08:50.393233] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 2.752 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.410441] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 17.123 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.412399] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 1.836 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.414030] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 1.456 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.414402] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.257 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.433200] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 18.734 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.441331] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:16:25.013 [2024-10-01 06:08:50.458624] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 25.253 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.458790] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.015 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.458908] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.035 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.458964] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.006 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.459029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:16:25.013 [2024-10-01 06:08:50.459038] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.011 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.463054] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 3.965 ms, status 0
00:16:25.013 [2024-10-01 06:08:50.463362] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.040 ms, status 0
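The 'FTL startup' total reported in the next record comes to 132.955 ms; the five slowest steps traced above (Initialize L2P, Initialize metadata, Restore P2L checkpoints, Restore valid map metadata, Initialize NV cache) account for roughly 94 ms of it. A one-line check of that sum (illustrative only):

    echo '25.253 + 22.642 + 18.734 + 17.123 + 10.260' | bc   # -> 94.012 (ms)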
00:16:25.013 [2024-10-01 06:08:50.464280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:25.013 [2024-10-01 06:08:50.465312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.955 ms, result 0
00:16:25.013 [2024-10-01 06:08:50.465992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:25.013 [2024-10-01 06:08:50.475712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:31.628  Copying: 39/256 [MB] (39 MBps) Copying: 78/256 [MB] (39 MBps) Copying: 117/256 [MB] (39 MBps) Copying: 157/256 [MB] (39 MBps) Copying: 197/256 [MB] (39 MBps) Copying: 237/256 [MB] (39 MBps) Copying: 256/256 [MB] (average 39 MBps)
00:16:31.628 [2024-10-01 06:08:56.940978] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:31.628 [2024-10-01 06:08:56.942402] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.004 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.942618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:31.628 [2024-10-01 06:08:56.943171] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 0.540 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.944736] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 1.503 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.951766] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 6.956 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.958834] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 6.961 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.960393] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 1.459 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.964450] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 3.976 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.964631] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.086 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.966431] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 1.757 ms, status 0
00:16:31.628 [2024-10-01 06:08:56.968012] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 1.506 ms, status 0
00:16:31.629 [2024-10-01 06:08:56.969125] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 1.040 ms, status 0
00:16:31.629 [2024-10-01 06:08:56.970688] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 1.331 ms, status 0
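The spdk_dd progress line above reports a steady average of 39 MBps for the 256 MiB random pattern, which squares with the timestamps: the copy runs from roughly 06:08:50.48 (last IO channel creation) to 06:08:56.94 (channel teardown), about 6.5 s. A quick arithmetic check (sketch):

    echo 'scale=2; 256 / 39' | bc   # -> 6.56 (seconds), matching the observed wall-clock gap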
00:16:31.629 [2024-10-01 06:08:56.970806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.970998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.971005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.971012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:31.629 [2024-10-01 06:08:56.971020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free
00:16:31.629 [2024-10-01 06:08:56.971027 .. 06:08:56.971554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (72 identical entries)
00:16:31.630 [2024-10-01 06:08:56.971571] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:31.630 [2024-10-01 06:08:56.971579] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f441fd4a-13ea-4411-bf3b-066c792337e7
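The statistics that follow show why a freshly created FTL instance reports WAF: inf. The device recorded 960 media writes (metadata it wrote on its own) against zero user writes, and the write-amplification factor is the ratio of the two. A minimal sketch of that ratio using the counters from this dump (illustrative helper, not SPDK code):

    # waf.py - write-amplification factor as printed by ftl_dev_dump_stats
    total_writes = 960      # media writes recorded by ftl0 (from the dump below)
    user_writes = 0         # host writes; none happened in this run
    waf = float("inf") if user_writes == 0 else total_writes / user_writes
    print(f"WAF: {waf}")    # -> WAF: inf, matching the log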
00:16:31.630 [2024-10-01 06:08:56.971587] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:31.630 [2024-10-01 06:08:56.971596] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:31.630 [2024-10-01 06:08:56.971603] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:31.630 [2024-10-01 06:08:56.971611] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:31.630 [2024-10-01 06:08:56.971626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:31.630 [2024-10-01 06:08:56.971634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:16:31.630 [2024-10-01 06:08:56.971641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:16:31.630 [2024-10-01 06:08:56.971647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:16:31.630 [2024-10-01 06:08:56.971654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:16:31.630 [2024-10-01 06:08:56.971660] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 0.901 ms, status 0
00:16:31.630 [2024-10-01 06:08:56.973541] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 1.839 ms, status 0
00:16:31.630 [2024-10-01 06:08:56.973696] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.076 ms, status 0
00:16:31.630 [2024-10-01 06:08:56.979410] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:56.979626] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:56.979708] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:56.979750] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:56.991307] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:57.000277] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:57.000389] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:57.000451] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:57.000549] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:57.000606] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:57.000677] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:57.000759] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
00:16:31.630 [2024-10-01 06:08:57.001013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.582 ms, result 0
00:16:31.889
00:16:31.889
00:16:32.148 06:08:57 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85218
00:16:32.148 06:08:57 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85218
00:16:32.148 06:08:57 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85218 ']'
00:16:32.148 06:08:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:32.148 06:08:57 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:32.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:32.148 06:08:57 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:32.148 06:08:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:32.148 06:08:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:16:32.148 06:08:57 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:16:32.148 [2024-10-01 06:08:57.590368] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
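Before the next test step can issue RPCs, the harness has to wait for the freshly launched spdk_tgt to open its RPC socket; that is what the waitforlisten call traced above does, with max_retries=100 visible in the xtrace. A rough stand-in for that wait loop (a Python illustration, not the actual autotest_common.sh implementation):

    import socket, time

    def wait_for_unix_socket(path="/var/tmp/spdk.sock", timeout=10.0):
        """Poll until something accepts connections on a UNIX-domain socket."""
        deadline = time.monotonic() + timeout
        while True:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)      # succeeds once spdk_tgt is listening
                return
            except OSError:
                if time.monotonic() >= deadline:
                    raise TimeoutError(f"nothing listening on {path}")
                time.sleep(0.1)
            finally:
                s.close()

Once the socket accepts a connection, rpc.py commands such as the load_config seen below can reach the target.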
00:16:32.148 [2024-10-01 06:08:57.590509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85218 ]
00:16:32.148 [2024-10-01 06:08:57.726447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:32.406 [2024-10-01 06:08:57.768804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:32.972 06:08:58 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:32.972 06:08:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0
00:16:32.972 06:08:58 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:16:33.230 [2024-10-01 06:08:58.664366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:33.230 [2024-10-01 06:08:58.664433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:33.230 [2024-10-01 06:08:58.835434] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.005 ms, status 0
00:16:33.230 [2024-10-01 06:08:58.837871] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 2.338 ms, status 0
00:16:33.230 [2024-10-01 06:08:58.837994] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:33.230 [2024-10-01 06:08:58.838243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:33.230 [2024-10-01 06:08:58.838257] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.270 ms, status 0
00:16:33.230 [2024-10-01 06:08:58.839991] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:33.230 [2024-10-01 06:08:58.842483] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 2.492 ms, status 0
00:16:33.231 [2024-10-01 06:08:58.842599] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.020 ms, status 0
00:16:33.490 [2024-10-01 06:08:58.849070] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 6.382 ms, status 0
00:16:33.490 [2024-10-01 06:08:58.849257] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.076 ms, status 0
00:16:33.490 [2024-10-01 06:08:58.849321] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.013 ms, status 0
00:16:33.490 [2024-10-01 06:08:58.849378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:16:33.491 [2024-10-01 06:08:58.851028] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 1.659 ms, status 0
00:16:33.491 [2024-10-01 06:08:58.851123] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.015 ms, status 0
00:16:33.491 [2024-10-01 06:08:58.851176] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:16:33.491 [2024-10-01 06:08:58.851195] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:16:33.491 [2024-10-01 06:08:58.851233] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:16:33.491 [2024-10-01 06:08:58.851260] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:16:33.491 [2024-10-01 06:08:58.851364] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:16:33.491 [2024-10-01 06:08:58.851376] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:16:33.491 [2024-10-01 06:08:58.851387] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
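The layout dump that follows is internally consistent and worth sanity-checking: 23592960 L2P entries at the reported 4-byte address size is exactly the 90.00 MiB l2p region, and, assuming the usual 4 KiB FTL block size (not printed in this log), those entries map 90 GiB of user space carved out of the 103424 MiB base device:

    # Numbers copied from the ftl_layout dump below.
    entries = 23592960                    # "L2P entries"
    entry_size = 4                        # "L2P address size: 4" (bytes)
    print(entries * entry_size / 2**20)   # 90.0 -> "Region l2p ... blocks: 90.00 MiB"

    block_size = 4096                     # assumption: 4 KiB FTL block, not shown here
    print(entries * block_size / 2**30)   # 90.0 GiB of user-addressable space

The later ftl_l2p_cache message ("l2p maximum resident size is: 59 (of 60) MiB") then simply says that only about two thirds of that 90 MiB table is kept resident in memory at once.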
00:16:33.491 [2024-10-01 06:08:58.851399] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:16:33.491 [2024-10-01 06:08:58.851408] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:16:33.491 [2024-10-01 06:08:58.851424] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:16:33.491 [2024-10-01 06:08:58.851435] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:16:33.491 [2024-10-01 06:08:58.851444] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:16:33.491 [2024-10-01 06:08:58.851451] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:16:33.491 [2024-10-01 06:08:58.851459] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.285 ms, status 0
00:16:33.491 [2024-10-01 06:08:58.851577] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.068 ms, status 0
00:16:33.491 [2024-10-01 06:08:58.851702] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:16:33.491 [2024-10-01 06:08:58.851713] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:16:33.491 [2024-10-01 06:08:58.851751] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 90.00 MiB
00:16:33.491 [2024-10-01 06:08:58.851777] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 90.12 MiB, blocks 0.50 MiB
00:16:33.491 [2024-10-01 06:08:58.851804] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
00:16:33.491 [2024-10-01 06:08:58.851829] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
00:16:33.491 [2024-10-01 06:08:58.851874] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
00:16:33.491 [2024-10-01 06:08:58.851899] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
00:16:33.491 [2024-10-01 06:08:58.851929] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
00:16:33.491 [2024-10-01 06:08:58.851953] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
00:16:33.491 [2024-10-01 06:08:58.851979] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
00:16:33.491 [2024-10-01 06:08:58.852004] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
00:16:33.491 [2024-10-01 06:08:58.852029] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
00:16:33.491 [2024-10-01 06:08:58.852057] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
00:16:33.491 [2024-10-01 06:08:58.852081] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
00:16:33.491 [2024-10-01 06:08:58.852103] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:16:33.491 [2024-10-01 06:08:58.852112] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:16:33.491 [2024-10-01 06:08:58.852136] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:16:33.491 [2024-10-01 06:08:58.852160] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:16:33.491 [2024-10-01 06:08:58.852185] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:16:33.491 [2024-10-01 06:08:58.852196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:33.491 [2024-10-01 06:08:58.852207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:16:33.491 [2024-10-01 06:08:58.852216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:16:33.491 [2024-10-01 06:08:58.852223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:16:33.491 [2024-10-01 06:08:58.852231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:16:33.491 [2024-10-01 06:08:58.852238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:16:33.491 [2024-10-01 06:08:58.852247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:16:33.491 [2024-10-01 06:08:58.852255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:16:33.491 [2024-10-01 06:08:58.852263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:16:33.491 [2024-10-01 06:08:58.852270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:16:33.491 [2024-10-01 06:08:58.852278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:16:33.491 [2024-10-01 06:08:58.852285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:16:33.491 [2024-10-01 06:08:58.852294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:16:33.492 [2024-10-01 06:08:58.852301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:16:33.492 [2024-10-01 06:08:58.852312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:16:33.492 [2024-10-01 06:08:58.852325] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:16:33.492 [2024-10-01 06:08:58.852334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:33.492 [2024-10-01 06:08:58.852342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:16:33.492 [2024-10-01 06:08:58.852352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:16:33.492 [2024-10-01 06:08:58.852359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:16:33.492 [2024-10-01 06:08:58.852367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:16:33.492 [2024-10-01 06:08:58.852374] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.744 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.864001] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 11.540 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.864179] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.064 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.874504] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 10.262 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.874619] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.003 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.875074] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.396 ms, status 0
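The restore steps that follow reload the instance's state from the NV cache; the ftl_nv_cache_load_state lines report 1 full and 3 empty chunks out of the 5 declared in the layout ("NV cache chunk count 5"), leaving one chunk in an intermediate state. The bookkeeping in miniature (a simplified model, not SPDK's structures):

    # Chunk accounting as implied by the messages below.
    total_chunks = 5                   # "NV cache chunk count 5" from the layout dump
    full_chunks, empty_chunks = 1, 3   # "full chunks = 1, empty chunks = 3"
    other = total_chunks - full_chunks - empty_chunks
    print(other)                       # 1 chunk neither full nor empty (e.g. open/in use)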
00:16:33.492 [2024-10-01 06:08:58.875253] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.112 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.892669] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 17.355 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.895611] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:16:33.492 [2024-10-01 06:08:58.895646] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:16:33.492 [2024-10-01 06:08:58.895660] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 2.764 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.911224] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 15.482 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.913177] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 1.824 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.914576] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 1.306 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.914996] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.304 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.932704] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 17.638 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.940556] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:16:33.492 [2024-10-01 06:08:58.956936] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 24.042 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.957099] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.013 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.957201] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.037 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.957274] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.010 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.957348] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:16:33.492 [2024-10-01 06:08:58.957359] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.014 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.961244] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 3.836 ms, status 0
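'Set FTL dirty state' is the last startup action before the device goes live: the superblock is marked dirty while the instance is running, and only a graceful shutdown (see 'Set FTL clean state' later in this log) flips it back, which is how the next startup can tell whether recovery is needed. A simplified model of that flag, under the assumption of a single clean/dirty bit (SPDK's on-disk format is richer):

    class Superblock:
        """Toy model of the clean/dirty handshake visible in these logs."""
        def __init__(self):
            self.clean = True                # as left by the previous clean shutdown
        def startup(self):
            needs_recovery = not self.clean  # dirty at boot suggests a prior crash
            self.clean = False               # "Set FTL dirty state"
            return needs_recovery
        def shutdown(self):
            self.clean = True                # "Set FTL clean state"

    sb = Superblock()
    assert sb.startup() is False             # this run: previous shutdown was clean
    sb.shutdown()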
00:16:33.492 [2024-10-01 06:08:58.961392] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.040 ms, status 0
00:16:33.492 [2024-10-01 06:08:58.962330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:33.492 [2024-10-01 06:08:58.963329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.594 ms, result 0
00:16:33.492 [2024-10-01 06:08:58.964385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:33.492 Some configs were skipped because the RPC state that can call them passed over.
00:16:33.492 06:08:58 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:16:33.750 [2024-10-01 06:08:59.191701] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Process trim': duration 1.916 ms, status 0
00:16:33.750 [2024-10-01 06:08:59.191827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.058 ms, result 0
00:16:33.750 true
00:16:34.010 06:08:59 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:16:34.010 [2024-10-01 06:08:59.391162] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Process trim': duration 1.132 ms, status 0
00:16:34.010 [2024-10-01 06:08:59.391281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.252 ms, result 0
00:16:34.010 true
00:16:34.010 06:08:59 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85218
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85218 ']'
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85218
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85218
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85218'
00:16:34.010 killing process with pid 85218
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85218
00:16:34.010 06:08:59 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85218
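The two bdev_ftl_unmap calls above trim 1024 blocks at each end of the device's logical space: 23591936 is exactly the total LBA count (23592960, the L2P entry figure from startup) minus 1024, so the test touches the first and the last valid range before stopping the target. The offsets check out:

    total_lbas = 23592960               # L2P entries reported at startup
    num_blocks = 1024
    first_lba = 0                       # first unmap: --lba 0
    last_lba = total_lbas - num_blocks  # second unmap: --lba 23591936
    assert last_lba == 23591936

killprocess then sends the default SIGTERM via kill and waits on the PID, which is what triggers the orderly 'FTL shutdown' sequence that follows.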
00:16:34.010 [2024-10-01 06:08:59.566239] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.003 ms, status 0
00:16:34.010 [2024-10-01 06:08:59.566358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:34.010 [2024-10-01 06:08:59.566923] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 0.550 ms, status 0
00:16:34.010 [2024-10-01 06:08:59.567262] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 0.267 ms, status 0
00:16:34.010 [2024-10-01 06:08:59.571474] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 4.148 ms, status 0
00:16:34.010 [2024-10-01 06:08:59.578454] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 6.889 ms, status 0
00:16:34.010 [2024-10-01 06:08:59.580046] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 1.478 ms, status 0
00:16:34.010 [2024-10-01 06:08:59.584093] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 3.957 ms, status 0
00:16:34.010 [2024-10-01 06:08:59.584281] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.090 ms, status 0
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:34.010 [2024-10-01 06:08:59.584318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.010 [2024-10-01 06:08:59.586129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.010 [2024-10-01 06:08:59.586161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:34.010 [2024-10-01 06:08:59.586170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:16:34.010 [2024-10-01 06:08:59.586184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.010 [2024-10-01 06:08:59.587505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.010 [2024-10-01 06:08:59.587537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:34.010 [2024-10-01 06:08:59.587546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:16:34.010 [2024-10-01 06:08:59.587555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.010 [2024-10-01 06:08:59.588565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.010 [2024-10-01 06:08:59.588597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:34.010 [2024-10-01 06:08:59.588606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:16:34.010 [2024-10-01 06:08:59.588614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.010 [2024-10-01 06:08:59.589706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.010 [2024-10-01 06:08:59.589738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:34.010 [2024-10-01 06:08:59.589747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:16:34.010 [2024-10-01 06:08:59.589756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.010 [2024-10-01 06:08:59.589787] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:34.010 [2024-10-01 06:08:59.589805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:34.010 [2024-10-01 06:08:59.589816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:34.010 [2024-10-01 06:08:59.589828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:34.010 [2024-10-01 06:08:59.589836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:34.010 [2024-10-01 06:08:59.589864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:34.010 [2024-10-01 06:08:59.589873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:34.010 [2024-10-01 06:08:59.589882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:34.011 [2024-10-01 06:08:59.589891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:34.011 [2024-10-01 06:08:59.589900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:34.011 [2024-10-01 06:08:59.589907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:34.011 [2024-10-01 06:08:59.589918] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 11-100: 0 / 261120 wr_cnt: 0 state: free 00:16:34.011
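The statistics dump that follows reports WAF as "inf". WAF (write amplification factor) is the ratio of total media writes to user-issued writes, so with user writes still at zero it has no finite value. A minimal sketch of that calculation, using hypothetical struct and field names rather than SPDK's internal API:

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical counters mirroring the fields printed by the dump
     * below: "total writes: 960" and "user writes: 0". */
    struct ftl_write_counters {
        unsigned long total_writes; /* all media writes, incl. metadata */
        unsigned long user_writes;  /* writes issued by the user */
    };

    /* WAF = total media writes / user writes; reported as "inf" when
     * nothing user-initiated has been written yet. */
    static double compute_waf(const struct ftl_write_counters *c)
    {
        if (c->user_writes == 0)
            return INFINITY;
        return (double)c->total_writes / (double)c->user_writes;
    }

    int main(void)
    {
        struct ftl_write_counters c = { .total_writes = 960, .user_writes = 0 };
        printf("WAF: %g\n", compute_waf(&c)); /* prints "WAF: inf" */
        return 0;
    }

Here total writes: 960 against user writes: 0 gives 960 / 0, reported as inf, consistent with a device that has only seen FTL metadata writes so far.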
[2024-10-01 06:08:59.590694] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:34.012
[2024-10-01 06:08:59.590702] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f441fd4a-13ea-4411-bf3b-066c792337e7 00:16:34.012
[2024-10-01 06:08:59.590712] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:34.012
[2024-10-01 06:08:59.590719] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:34.012
[2024-10-01 06:08:59.590728] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:34.012
[2024-10-01 06:08:59.590738] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:34.012
[2024-10-01 06:08:59.590747] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:34.012
[2024-10-01 06:08:59.590755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:34.012
[2024-10-01 06:08:59.590767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:34.012
[2024-10-01 06:08:59.590773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:34.012
[2024-10-01 06:08:59.590781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:34.012
[2024-10-01 06:08:59.590788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:34.012 [2024-10-01 06:08:59.590801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:34.012 [2024-10-01 06:08:59.590810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:16:34.012 [2024-10-01 06:08:59.590821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.592626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.012 [2024-10-01 06:08:59.592656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:34.012 [2024-10-01 06:08:59.592665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.766 ms 00:16:34.012 [2024-10-01 06:08:59.592674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.592785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.012 [2024-10-01 06:08:59.592808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:34.012 [2024-10-01 06:08:59.592818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:34.012 [2024-10-01 06:08:59.592827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.599270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.599310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:34.012 [2024-10-01 06:08:59.599321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.599331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.599408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.599420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:34.012 [2024-10-01 06:08:59.599429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.599440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.599482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.599494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:34.012 [2024-10-01 06:08:59.599504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.599517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.599536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.599549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:34.012 [2024-10-01 06:08:59.599557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.599567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.611330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.611381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:34.012 [2024-10-01 06:08:59.611392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.611401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 
06:08:59.620403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.620461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:34.012 [2024-10-01 06:08:59.620473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.620487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.620546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.620566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:34.012 [2024-10-01 06:08:59.620580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.620593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.620625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.620635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:34.012 [2024-10-01 06:08:59.620643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.620652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.620720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.620731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:34.012 [2024-10-01 06:08:59.620739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.620748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.620781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.620796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:34.012 [2024-10-01 06:08:59.620803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.620814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.620949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.620968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:34.012 [2024-10-01 06:08:59.620976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.620985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.621036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:34.012 [2024-10-01 06:08:59.621050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:34.012 [2024-10-01 06:08:59.621059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:34.012 [2024-10-01 06:08:59.621070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.012 [2024-10-01 06:08:59.621231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.952 ms, result 0 00:16:34.270 06:08:59 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:34.270 06:08:59 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:34.528 [2024-10-01 06:08:59.916989] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:34.529 [2024-10-01 06:08:59.917113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85255 ] 00:16:34.529 [2024-10-01 06:09:00.051265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.529 [2024-10-01 06:09:00.095342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.788 [2024-10-01 06:09:00.198634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.788 [2024-10-01 06:09:00.198709] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.788 [2024-10-01 06:09:00.353635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.353699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:34.788 [2024-10-01 06:09:00.353718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:34.788 [2024-10-01 06:09:00.353727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.356129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.356169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:34.788 [2024-10-01 06:09:00.356182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.384 ms 00:16:34.788 [2024-10-01 06:09:00.356190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.356261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:34.788 [2024-10-01 06:09:00.356502] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:34.788 [2024-10-01 06:09:00.356526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.356534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:34.788 [2024-10-01 06:09:00.356545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:34.788 [2024-10-01 06:09:00.356553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.358083] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:34.788 [2024-10-01 06:09:00.360752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.360786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:34.788 [2024-10-01 06:09:00.360804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.671 ms 00:16:34.788 [2024-10-01 06:09:00.360817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.360891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.360902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:34.788 [2024-10-01 06:09:00.360911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.020 ms 00:16:34.788 [2024-10-01 06:09:00.360918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.367317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.367348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:34.788 [2024-10-01 06:09:00.367357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.357 ms 00:16:34.788 [2024-10-01 06:09:00.367367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.367492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.367512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:34.788 [2024-10-01 06:09:00.367521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:34.788 [2024-10-01 06:09:00.367528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.367557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.367566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:34.788 [2024-10-01 06:09:00.367578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:34.788 [2024-10-01 06:09:00.367586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.367607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:34.788 [2024-10-01 06:09:00.369276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.369304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:34.788 [2024-10-01 06:09:00.369314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:16:34.788 [2024-10-01 06:09:00.369323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.369367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.788 [2024-10-01 06:09:00.369379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:34.788 [2024-10-01 06:09:00.369392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:34.788 [2024-10-01 06:09:00.369400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.788 [2024-10-01 06:09:00.369419] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:34.788 [2024-10-01 06:09:00.369438] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:34.788 [2024-10-01 06:09:00.369478] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:34.788 [2024-10-01 06:09:00.369497] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:34.788 [2024-10-01 06:09:00.369605] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:34.789 [2024-10-01 06:09:00.369616] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:34.789 [2024-10-01 06:09:00.369627] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:34.789 [2024-10-01 06:09:00.369638] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:34.789 [2024-10-01 06:09:00.369647] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:34.789 [2024-10-01 06:09:00.369659] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:34.789 [2024-10-01 06:09:00.369670] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:34.789 [2024-10-01 06:09:00.369678] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:34.789 [2024-10-01 06:09:00.369685] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:34.789 [2024-10-01 06:09:00.369693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.789 [2024-10-01 06:09:00.369703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:34.789 [2024-10-01 06:09:00.369713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:16:34.789 [2024-10-01 06:09:00.369723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.789 [2024-10-01 06:09:00.369810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.789 [2024-10-01 06:09:00.369825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:34.789 [2024-10-01 06:09:00.369832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:34.789 [2024-10-01 06:09:00.369840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.789 [2024-10-01 06:09:00.369951] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:34.789 [2024-10-01 06:09:00.369968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:34.789 [2024-10-01 06:09:00.369978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.789 [2024-10-01 06:09:00.369990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:34.789 [2024-10-01 06:09:00.370008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:34.789 [2024-10-01 06:09:00.370026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:34.789 [2024-10-01 06:09:00.370037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.789 [2024-10-01 06:09:00.370054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:34.789 [2024-10-01 06:09:00.370061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:34.789 [2024-10-01 06:09:00.370069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.789 [2024-10-01 06:09:00.370078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:34.789 [2024-10-01 06:09:00.370086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:34.789 [2024-10-01 06:09:00.370094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370102] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:34.789 [2024-10-01 06:09:00.370110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:34.789 [2024-10-01 06:09:00.370117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:34.789 [2024-10-01 06:09:00.370134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.789 [2024-10-01 06:09:00.370150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:34.789 [2024-10-01 06:09:00.370158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.789 [2024-10-01 06:09:00.370180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:34.789 [2024-10-01 06:09:00.370188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.789 [2024-10-01 06:09:00.370203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:34.789 [2024-10-01 06:09:00.370211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.789 [2024-10-01 06:09:00.370227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:34.789 [2024-10-01 06:09:00.370234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.789 [2024-10-01 06:09:00.370250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:34.789 [2024-10-01 06:09:00.370258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:34.789 [2024-10-01 06:09:00.370265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.789 [2024-10-01 06:09:00.370273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:34.789 [2024-10-01 06:09:00.370281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:34.789 [2024-10-01 06:09:00.370290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:34.789 [2024-10-01 06:09:00.370308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:34.789 [2024-10-01 06:09:00.370315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370321] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:34.789 [2024-10-01 06:09:00.370329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:34.789 [2024-10-01 06:09:00.370336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.789 [2024-10-01 06:09:00.370343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.789 [2024-10-01 06:09:00.370351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:34.789 
[2024-10-01 06:09:00.370358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:34.789 [2024-10-01 06:09:00.370364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:34.789 [2024-10-01 06:09:00.370371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:34.789 [2024-10-01 06:09:00.370377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:34.789 [2024-10-01 06:09:00.370384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:34.789 [2024-10-01 06:09:00.370393] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:34.789 [2024-10-01 06:09:00.370405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.789 [2024-10-01 06:09:00.370414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:34.789 [2024-10-01 06:09:00.370423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:34.789 [2024-10-01 06:09:00.370431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:34.789 [2024-10-01 06:09:00.370438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:34.789 [2024-10-01 06:09:00.370446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:34.789 [2024-10-01 06:09:00.370453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:34.789 [2024-10-01 06:09:00.370461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:34.789 [2024-10-01 06:09:00.370473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:34.789 [2024-10-01 06:09:00.370481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:34.789 [2024-10-01 06:09:00.370488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:34.789 [2024-10-01 06:09:00.370495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:34.789 [2024-10-01 06:09:00.370502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:34.789 [2024-10-01 06:09:00.370508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:34.789 [2024-10-01 06:09:00.370516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:34.789 [2024-10-01 06:09:00.370523] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:34.789 [2024-10-01 06:09:00.370531] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.789 [2024-10-01 06:09:00.370543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:34.789 [2024-10-01 06:09:00.370553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:34.790 [2024-10-01 06:09:00.370560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:34.790 [2024-10-01 06:09:00.370567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:34.790 [2024-10-01 06:09:00.370575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.790 [2024-10-01 06:09:00.370583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:34.790 [2024-10-01 06:09:00.370592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:16:34.790 [2024-10-01 06:09:00.370600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.790 [2024-10-01 06:09:00.392640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.790 [2024-10-01 06:09:00.392719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:34.790 [2024-10-01 06:09:00.392749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.983 ms 00:16:34.790 [2024-10-01 06:09:00.392768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.790 [2024-10-01 06:09:00.393088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.790 [2024-10-01 06:09:00.393126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:34.790 [2024-10-01 06:09:00.393146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:16:34.790 [2024-10-01 06:09:00.393170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.048 [2024-10-01 06:09:00.404877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.048 [2024-10-01 06:09:00.404916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.048 [2024-10-01 06:09:00.404926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.647 ms 00:16:35.048 [2024-10-01 06:09:00.404938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.048 [2024-10-01 06:09:00.405009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.048 [2024-10-01 06:09:00.405019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.048 [2024-10-01 06:09:00.405031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:35.048 [2024-10-01 06:09:00.405038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.048 [2024-10-01 06:09:00.405453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.048 [2024-10-01 06:09:00.405475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.048 [2024-10-01 06:09:00.405484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:16:35.048 [2024-10-01 06:09:00.405491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.048 [2024-10-01 
06:09:00.405635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.048 [2024-10-01 06:09:00.405652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.049 [2024-10-01 06:09:00.405666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:35.049 [2024-10-01 06:09:00.405677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.411748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.411778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.049 [2024-10-01 06:09:00.411788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.047 ms 00:16:35.049 [2024-10-01 06:09:00.411797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.414431] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:35.049 [2024-10-01 06:09:00.414468] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:35.049 [2024-10-01 06:09:00.414480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.414489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:35.049 [2024-10-01 06:09:00.414498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:16:35.049 [2024-10-01 06:09:00.414505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.429171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.429211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:35.049 [2024-10-01 06:09:00.429224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.620 ms 00:16:35.049 [2024-10-01 06:09:00.429233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.430901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.430931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:35.049 [2024-10-01 06:09:00.430940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.596 ms 00:16:35.049 [2024-10-01 06:09:00.430948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.432402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.432431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:35.049 [2024-10-01 06:09:00.432447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:16:35.049 [2024-10-01 06:09:00.432454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.432777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.432794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:35.049 [2024-10-01 06:09:00.432807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:16:35.049 [2024-10-01 06:09:00.432817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.450814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.450884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:35.049 [2024-10-01 06:09:00.450898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.973 ms 00:16:35.049 [2024-10-01 06:09:00.450907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.458625] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:35.049 [2024-10-01 06:09:00.475604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.475647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:35.049 [2024-10-01 06:09:00.475661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.591 ms 00:16:35.049 [2024-10-01 06:09:00.475669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.475771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.475782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:35.049 [2024-10-01 06:09:00.475792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:35.049 [2024-10-01 06:09:00.475807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.475884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.475895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:35.049 [2024-10-01 06:09:00.475903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:35.049 [2024-10-01 06:09:00.475911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.475938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.475947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:35.049 [2024-10-01 06:09:00.475955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:35.049 [2024-10-01 06:09:00.475963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.475995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:35.049 [2024-10-01 06:09:00.476006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.476017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:35.049 [2024-10-01 06:09:00.476025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:35.049 [2024-10-01 06:09:00.476032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.479770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.479807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:35.049 [2024-10-01 06:09:00.479818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.715 ms 00:16:35.049 [2024-10-01 06:09:00.479826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.479929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.049 [2024-10-01 06:09:00.479943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:16:35.049 [2024-10-01 06:09:00.479952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:35.049 [2024-10-01 06:09:00.479960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.049 [2024-10-01 06:09:00.480878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:35.049 [2024-10-01 06:09:00.481884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.908 ms, result 0 00:16:35.049 [2024-10-01 06:09:00.482612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:35.049 [2024-10-01 06:09:00.492305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:41.121  Copying: 44/256 [MB] (44 MBps) Copying: 86/256 [MB] (41 MBps) Copying: 127/256 [MB] (41 MBps) Copying: 170/256 [MB] (42 MBps) Copying: 213/256 [MB] (42 MBps) Copying: 254/256 [MB] (41 MBps) Copying: 256/256 [MB] (average 42 MBps)[2024-10-01 06:09:06.520229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:41.121 [2024-10-01 06:09:06.521653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.121 [2024-10-01 06:09:06.521693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:41.121 [2024-10-01 06:09:06.521707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:41.121 [2024-10-01 06:09:06.521720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.121 [2024-10-01 06:09:06.521742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:41.121 [2024-10-01 06:09:06.522303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.121 [2024-10-01 06:09:06.522329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:41.121 [2024-10-01 06:09:06.522339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:16:41.121 [2024-10-01 06:09:06.522348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.121 [2024-10-01 06:09:06.522605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.121 [2024-10-01 06:09:06.522625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:41.121 [2024-10-01 06:09:06.522634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:16:41.121 [2024-10-01 06:09:06.522647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.121 [2024-10-01 06:09:06.526573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.121 [2024-10-01 06:09:06.526598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:41.121 [2024-10-01 06:09:06.526609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.908 ms 00:16:41.121 [2024-10-01 06:09:06.526618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.121 [2024-10-01 06:09:06.533552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.121 [2024-10-01 06:09:06.533581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:41.121 [2024-10-01 06:09:06.533592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.917 ms 00:16:41.121 [2024-10-01 
06:09:06.533600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.121 [2024-10-01 06:09:06.535539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.121 [2024-10-01 06:09:06.535572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:41.121 [2024-10-01 06:09:06.535581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms 00:16:41.121 [2024-10-01 06:09:06.535588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.121 [2024-10-01 06:09:06.539464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.121 [2024-10-01 06:09:06.539502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:41.121 [2024-10-01 06:09:06.539520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.854 ms 00:16:41.121 [2024-10-01 06:09:06.539529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.122 [2024-10-01 06:09:06.539652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.122 [2024-10-01 06:09:06.539667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:41.122 [2024-10-01 06:09:06.539676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:41.122 [2024-10-01 06:09:06.539685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.122 [2024-10-01 06:09:06.541379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.122 [2024-10-01 06:09:06.541408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:41.122 [2024-10-01 06:09:06.541416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:16:41.122 [2024-10-01 06:09:06.541424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.122 [2024-10-01 06:09:06.542900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.122 [2024-10-01 06:09:06.542928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:41.122 [2024-10-01 06:09:06.542937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:16:41.122 [2024-10-01 06:09:06.542943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.122 [2024-10-01 06:09:06.544140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.122 [2024-10-01 06:09:06.544170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:41.122 [2024-10-01 06:09:06.544178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:16:41.122 [2024-10-01 06:09:06.544185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.122 [2024-10-01 06:09:06.545228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.122 [2024-10-01 06:09:06.545258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:41.122 [2024-10-01 06:09:06.545266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:16:41.122 [2024-10-01 06:09:06.545274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.122 [2024-10-01 06:09:06.545293] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:41.122 [2024-10-01 06:09:06.545313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:41.122 [2024-10-01 06:09:06.545323] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free 00:16:41.122
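Every entry in the bands-validity dump above has the fixed shape "Band <n>: <valid> / <size> wr_cnt: <count> state: <state>", where <valid> / <size> reads as valid blocks out of the band's capacity (261120 blocks per band on this device) and wr_cnt is the band's write count. A small, hypothetical parser for entries in this shape (not part of SPDK, shown only to pin down the format):

    #include <stdio.h>

    /* Parse one "Band N: V / T wr_cnt: W state: S" entry from the dump.
     * Returns 1 on success, 0 if the line does not match the format. */
    static int parse_band_entry(const char *line, int *band,
                                unsigned long *valid, unsigned long *size,
                                unsigned long *wr_cnt, char state[16])
    {
        return sscanf(line, "Band %d: %lu / %lu wr_cnt: %lu state: %15s",
                      band, valid, size, wr_cnt, state) == 5;
    }

    int main(void)
    {
        int band;
        unsigned long valid, size, wr_cnt;
        char state[16];

        if (parse_band_entry("Band 2: 0 / 261120 wr_cnt: 0 state: free",
                             &band, &valid, &size, &wr_cnt, state))
            printf("band %d: %lu/%lu valid, wr_cnt %lu, state %s\n",
                   band, valid, size, wr_cnt, state);
        return 0;
    }

With every band reporting 0 / 261120 in state free, the device holds no valid user data at this point, matching the "total valid LBAs: 0" line in the statistics dump that follows.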
wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.545995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:41.123 [2024-10-01 06:09:06.546110] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:16:41.123 [2024-10-01 06:09:06.546118] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f441fd4a-13ea-4411-bf3b-066c792337e7 00:16:41.123 [2024-10-01 06:09:06.546126] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:41.123 [2024-10-01 06:09:06.546134] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:41.123 [2024-10-01 06:09:06.546141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:41.123 [2024-10-01 06:09:06.546150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:41.123 [2024-10-01 06:09:06.546157] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:41.123 [2024-10-01 06:09:06.546165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:41.123 [2024-10-01 06:09:06.546173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:41.123 [2024-10-01 06:09:06.546179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:41.123 [2024-10-01 06:09:06.546186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:41.123 [2024-10-01 06:09:06.546192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.123 [2024-10-01 06:09:06.546199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:41.123 [2024-10-01 06:09:06.546209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:16:41.123 [2024-10-01 06:09:06.546216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.548066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.123 [2024-10-01 06:09:06.548089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:41.123 [2024-10-01 06:09:06.548099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:16:41.123 [2024-10-01 06:09:06.548107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.548203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.123 [2024-10-01 06:09:06.548224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:41.123 [2024-10-01 06:09:06.548232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:41.123 [2024-10-01 06:09:06.548247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.553973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.554010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.123 [2024-10-01 06:09:06.554021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.554030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.554145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.554169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.123 [2024-10-01 06:09:06.554178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.554185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.554228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 
[2024-10-01 06:09:06.554238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.123 [2024-10-01 06:09:06.554246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.554253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.554272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.554280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.123 [2024-10-01 06:09:06.554290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.554298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.565675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.565730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.123 [2024-10-01 06:09:06.565742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.565750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.574725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.574781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.123 [2024-10-01 06:09:06.574792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.574800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.574836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.574858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.123 [2024-10-01 06:09:06.574874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.574885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.574916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.574925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.123 [2024-10-01 06:09:06.574933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.574943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.575012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.575021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.123 [2024-10-01 06:09:06.575029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.575037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.575067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.575077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:41.123 [2024-10-01 06:09:06.575085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.575093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.575148] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.123 [2024-10-01 06:09:06.575158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.123 [2024-10-01 06:09:06.575166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.123 [2024-10-01 06:09:06.575173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.123 [2024-10-01 06:09:06.575226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.124 [2024-10-01 06:09:06.575237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.124 [2024-10-01 06:09:06.575250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.124 [2024-10-01 06:09:06.575261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.124 [2024-10-01 06:09:06.575407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.726 ms, result 0 00:16:41.382 00:16:41.382 00:16:41.382 06:09:06 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:41.382 06:09:06 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:41.950 06:09:07 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:41.950 [2024-10-01 06:09:07.336252] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:41.950 [2024-10-01 06:09:07.336383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85338 ] 00:16:41.950 [2024-10-01 06:09:07.470588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.950 [2024-10-01 06:09:07.513274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.210 [2024-10-01 06:09:07.614728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.210 [2024-10-01 06:09:07.614808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.210 [2024-10-01 06:09:07.769454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.769516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:42.210 [2024-10-01 06:09:07.769530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.210 [2024-10-01 06:09:07.769542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.210 [2024-10-01 06:09:07.771903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.771945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.210 [2024-10-01 06:09:07.771958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:16:42.210 [2024-10-01 06:09:07.771966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.210 [2024-10-01 06:09:07.772087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:42.210 [2024-10-01 06:09:07.772400] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:42.210 [2024-10-01 06:09:07.772430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.772438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.210 [2024-10-01 06:09:07.772450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:16:42.210 [2024-10-01 06:09:07.772457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.210 [2024-10-01 06:09:07.773911] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:42.210 [2024-10-01 06:09:07.776457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.776489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:42.210 [2024-10-01 06:09:07.776503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:16:42.210 [2024-10-01 06:09:07.776511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.210 [2024-10-01 06:09:07.776569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.776579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:42.210 [2024-10-01 06:09:07.776588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:42.210 [2024-10-01 06:09:07.776597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.210 [2024-10-01 06:09:07.782950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.782981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.210 [2024-10-01 06:09:07.782991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.317 ms 00:16:42.210 [2024-10-01 06:09:07.783002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.210 [2024-10-01 06:09:07.783135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.783156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.210 [2024-10-01 06:09:07.783165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:42.210 [2024-10-01 06:09:07.783173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.210 [2024-10-01 06:09:07.783201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.783210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:42.210 [2024-10-01 06:09:07.783221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:42.210 [2024-10-01 06:09:07.783228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.210 [2024-10-01 06:09:07.783250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:42.210 [2024-10-01 06:09:07.784900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.210 [2024-10-01 06:09:07.784930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.210 [2024-10-01 06:09:07.784938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:16:42.211 [2024-10-01 06:09:07.784947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.211 [2024-10-01 
06:09:07.784983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.211 [2024-10-01 06:09:07.784995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:42.211 [2024-10-01 06:09:07.785010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:42.211 [2024-10-01 06:09:07.785018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.211 [2024-10-01 06:09:07.785036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:42.211 [2024-10-01 06:09:07.785055] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:42.211 [2024-10-01 06:09:07.785096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:42.211 [2024-10-01 06:09:07.785115] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:42.211 [2024-10-01 06:09:07.785231] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:42.211 [2024-10-01 06:09:07.785251] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:42.211 [2024-10-01 06:09:07.785262] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:42.211 [2024-10-01 06:09:07.785272] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785281] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785290] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:42.211 [2024-10-01 06:09:07.785297] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:42.211 [2024-10-01 06:09:07.785304] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:42.211 [2024-10-01 06:09:07.785312] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:42.211 [2024-10-01 06:09:07.785320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.211 [2024-10-01 06:09:07.785329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:42.211 [2024-10-01 06:09:07.785340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:16:42.211 [2024-10-01 06:09:07.785348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.211 [2024-10-01 06:09:07.785435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.211 [2024-10-01 06:09:07.785447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:42.211 [2024-10-01 06:09:07.785455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:42.211 [2024-10-01 06:09:07.785466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.211 [2024-10-01 06:09:07.785569] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:42.211 [2024-10-01 06:09:07.785580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:42.211 [2024-10-01 06:09:07.785590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785604] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:42.211 [2024-10-01 06:09:07.785622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:42.211 [2024-10-01 06:09:07.785650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.211 [2024-10-01 06:09:07.785670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:42.211 [2024-10-01 06:09:07.785679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:42.211 [2024-10-01 06:09:07.785686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.211 [2024-10-01 06:09:07.785695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:42.211 [2024-10-01 06:09:07.785703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:42.211 [2024-10-01 06:09:07.785710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:42.211 [2024-10-01 06:09:07.785727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:42.211 [2024-10-01 06:09:07.785751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:42.211 [2024-10-01 06:09:07.785775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:42.211 [2024-10-01 06:09:07.785802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:42.211 [2024-10-01 06:09:07.785825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:42.211 [2024-10-01 06:09:07.785860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.211 [2024-10-01 06:09:07.785875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:42.211 [2024-10-01 06:09:07.785883] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:42.211 [2024-10-01 06:09:07.785890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.211 [2024-10-01 06:09:07.785898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:42.211 [2024-10-01 06:09:07.785908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:42.211 [2024-10-01 06:09:07.785916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:42.211 [2024-10-01 06:09:07.785934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:42.211 [2024-10-01 06:09:07.785942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785950] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:42.211 [2024-10-01 06:09:07.785961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:42.211 [2024-10-01 06:09:07.785969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.211 [2024-10-01 06:09:07.785980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.211 [2024-10-01 06:09:07.785988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:42.211 [2024-10-01 06:09:07.785995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:42.211 [2024-10-01 06:09:07.786002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:42.211 [2024-10-01 06:09:07.786009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:42.211 [2024-10-01 06:09:07.786016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:42.211 [2024-10-01 06:09:07.786023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:42.211 [2024-10-01 06:09:07.786031] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:42.211 [2024-10-01 06:09:07.786041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.211 [2024-10-01 06:09:07.786052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:42.211 [2024-10-01 06:09:07.786059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:42.211 [2024-10-01 06:09:07.786068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:42.211 [2024-10-01 06:09:07.786075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:42.211 [2024-10-01 06:09:07.786082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:42.211 [2024-10-01 06:09:07.786089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:42.211 [2024-10-01 06:09:07.786096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:42.211 [2024-10-01 
06:09:07.786109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:42.211 [2024-10-01 06:09:07.786115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:42.211 [2024-10-01 06:09:07.786122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:42.211 [2024-10-01 06:09:07.786129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:42.211 [2024-10-01 06:09:07.786136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:42.211 [2024-10-01 06:09:07.786143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:42.211 [2024-10-01 06:09:07.786150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:42.211 [2024-10-01 06:09:07.786157] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:42.211 [2024-10-01 06:09:07.786167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.211 [2024-10-01 06:09:07.786175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:42.211 [2024-10-01 06:09:07.786182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:42.212 [2024-10-01 06:09:07.786192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:42.212 [2024-10-01 06:09:07.786199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:42.212 [2024-10-01 06:09:07.786207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.212 [2024-10-01 06:09:07.786214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:42.212 [2024-10-01 06:09:07.786224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:16:42.212 [2024-10-01 06:09:07.786231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.212 [2024-10-01 06:09:07.805719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.212 [2024-10-01 06:09:07.805765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.212 [2024-10-01 06:09:07.805778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.435 ms 00:16:42.212 [2024-10-01 06:09:07.805789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.212 [2024-10-01 06:09:07.806003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.212 [2024-10-01 06:09:07.806030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:42.212 [2024-10-01 06:09:07.806043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:42.212 [2024-10-01 06:09:07.806059] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.212 [2024-10-01 06:09:07.817635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.212 [2024-10-01 06:09:07.817678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.212 [2024-10-01 06:09:07.817693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.543 ms 00:16:42.212 [2024-10-01 06:09:07.817705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.212 [2024-10-01 06:09:07.817790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.212 [2024-10-01 06:09:07.817814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.212 [2024-10-01 06:09:07.817826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:42.212 [2024-10-01 06:09:07.817837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.212 [2024-10-01 06:09:07.818251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.212 [2024-10-01 06:09:07.818274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.212 [2024-10-01 06:09:07.818285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:16:42.212 [2024-10-01 06:09:07.818299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.212 [2024-10-01 06:09:07.818438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.212 [2024-10-01 06:09:07.818455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.212 [2024-10-01 06:09:07.818465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:16:42.212 [2024-10-01 06:09:07.818477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.212 [2024-10-01 06:09:07.824462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.212 [2024-10-01 06:09:07.824492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.212 [2024-10-01 06:09:07.824501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.964 ms 00:16:42.212 [2024-10-01 06:09:07.824509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.827218] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:42.473 [2024-10-01 06:09:07.827256] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:42.473 [2024-10-01 06:09:07.827268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.827276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:42.473 [2024-10-01 06:09:07.827285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:16:42.473 [2024-10-01 06:09:07.827293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.842020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.842056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:42.473 [2024-10-01 06:09:07.842068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.629 ms 00:16:42.473 [2024-10-01 06:09:07.842075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 
06:09:07.843794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.843826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:42.473 [2024-10-01 06:09:07.843835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:16:42.473 [2024-10-01 06:09:07.843843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.845332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.845364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:42.473 [2024-10-01 06:09:07.845378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:16:42.473 [2024-10-01 06:09:07.845385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.845707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.845767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:42.473 [2024-10-01 06:09:07.845777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:42.473 [2024-10-01 06:09:07.845787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.863630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.863681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:42.473 [2024-10-01 06:09:07.863694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.821 ms 00:16:42.473 [2024-10-01 06:09:07.863702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.871375] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:42.473 [2024-10-01 06:09:07.888315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.888359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:42.473 [2024-10-01 06:09:07.888371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.527 ms 00:16:42.473 [2024-10-01 06:09:07.888385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.888480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.888492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:42.473 [2024-10-01 06:09:07.888501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:42.473 [2024-10-01 06:09:07.888515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.888572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.888581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:42.473 [2024-10-01 06:09:07.888589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:42.473 [2024-10-01 06:09:07.888597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.888618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.888627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:42.473 [2024-10-01 06:09:07.888635] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.473 [2024-10-01 06:09:07.888642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.888677] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:42.473 [2024-10-01 06:09:07.888689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.888700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:42.473 [2024-10-01 06:09:07.888708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:42.473 [2024-10-01 06:09:07.888715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.892679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.892716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:42.473 [2024-10-01 06:09:07.892726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.944 ms 00:16:42.473 [2024-10-01 06:09:07.892735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.892823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:07.892837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:42.473 [2024-10-01 06:09:07.892860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:42.473 [2024-10-01 06:09:07.892868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:07.893866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.473 [2024-10-01 06:09:07.894855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.007 ms, result 0 00:16:42.473 [2024-10-01 06:09:07.895803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:42.473 [2024-10-01 06:09:07.905280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.473  Copying: 4096/4096 [kB] (average 39 MBps)[2024-10-01 06:09:08.008721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:42.473 [2024-10-01 06:09:08.009660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.009697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:42.473 [2024-10-01 06:09:08.009713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:42.473 [2024-10-01 06:09:08.009721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:08.009741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:42.473 [2024-10-01 06:09:08.010302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.010326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:42.473 [2024-10-01 06:09:08.010336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:16:42.473 [2024-10-01 06:09:08.010344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:42.473 [2024-10-01 06:09:08.011869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.011900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:42.473 [2024-10-01 06:09:08.011910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.505 ms 00:16:42.473 [2024-10-01 06:09:08.011918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:08.016066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.016092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:42.473 [2024-10-01 06:09:08.016102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.128 ms 00:16:42.473 [2024-10-01 06:09:08.016110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:08.022972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.022999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:42.473 [2024-10-01 06:09:08.023008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.837 ms 00:16:42.473 [2024-10-01 06:09:08.023016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:08.024730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.024763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:42.473 [2024-10-01 06:09:08.024772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:16:42.473 [2024-10-01 06:09:08.024779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:08.028525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.028560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:42.473 [2024-10-01 06:09:08.028575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.717 ms 00:16:42.473 [2024-10-01 06:09:08.028582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:08.028704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.028720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:42.473 [2024-10-01 06:09:08.028732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:42.473 [2024-10-01 06:09:08.028740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.473 [2024-10-01 06:09:08.030919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.473 [2024-10-01 06:09:08.030949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:42.473 [2024-10-01 06:09:08.030958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:16:42.474 [2024-10-01 06:09:08.030965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.474 [2024-10-01 06:09:08.032068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.474 [2024-10-01 06:09:08.032096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:42.474 [2024-10-01 06:09:08.032105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:16:42.474 [2024-10-01 06:09:08.032112] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.474 [2024-10-01 06:09:08.033361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.474 [2024-10-01 06:09:08.033390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:42.474 [2024-10-01 06:09:08.033398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:16:42.474 [2024-10-01 06:09:08.033404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.474 [2024-10-01 06:09:08.034705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.474 [2024-10-01 06:09:08.034736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:42.474 [2024-10-01 06:09:08.034744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms 00:16:42.474 [2024-10-01 06:09:08.034751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.474 [2024-10-01 06:09:08.034779] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:42.474 [2024-10-01 06:09:08.034798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034946] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.034997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 
[2024-10-01 06:09:08.035134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:16:42.474 [2024-10-01 06:09:08.035318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:42.474 [2024-10-01 06:09:08.035397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:42.475 [2024-10-01 06:09:08.035569] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:42.475 [2024-10-01 06:09:08.035576] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f441fd4a-13ea-4411-bf3b-066c792337e7 00:16:42.475 [2024-10-01 06:09:08.035584] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:42.475 [2024-10-01 06:09:08.035591] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:42.475 [2024-10-01 06:09:08.035598] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:42.475 [2024-10-01 06:09:08.035605] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:42.475 [2024-10-01 06:09:08.035612] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:42.475 [2024-10-01 06:09:08.035620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:42.475 [2024-10-01 06:09:08.035627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:42.475 [2024-10-01 06:09:08.035633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:42.475 [2024-10-01 06:09:08.035640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:42.475 [2024-10-01 06:09:08.035646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.475 [2024-10-01 06:09:08.035654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:42.475 [2024-10-01 06:09:08.035664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:16:42.475 [2024-10-01 06:09:08.035671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.037435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.475 [2024-10-01 06:09:08.037459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:42.475 [2024-10-01 06:09:08.037472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.748 ms 00:16:42.475 [2024-10-01 06:09:08.037481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.037574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.475 [2024-10-01 06:09:08.037586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:16:42.475 [2024-10-01 06:09:08.037594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:42.475 [2024-10-01 06:09:08.037602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.043217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.043256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.475 [2024-10-01 06:09:08.043265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.043274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.043342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.043356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.475 [2024-10-01 06:09:08.043363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.043370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.043410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.043426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.475 [2024-10-01 06:09:08.043434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.043441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.043459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.043475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.475 [2024-10-01 06:09:08.043485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.043493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.054869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.054907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.475 [2024-10-01 06:09:08.054919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.054928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.063763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.063815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.475 [2024-10-01 06:09:08.063826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.063834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.063988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.064005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.475 [2024-10-01 06:09:08.064013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.064021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.064052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 
06:09:08.064061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.475 [2024-10-01 06:09:08.064069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.064079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.064152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.064168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.475 [2024-10-01 06:09:08.064176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.064183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.064213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.064222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:42.475 [2024-10-01 06:09:08.064230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.064238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.064287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.064296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.475 [2024-10-01 06:09:08.064304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.064311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.064356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.475 [2024-10-01 06:09:08.064370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.475 [2024-10-01 06:09:08.064379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.475 [2024-10-01 06:09:08.064395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.475 [2024-10-01 06:09:08.064542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.849 ms, result 0 00:16:42.734 00:16:42.734 00:16:42.734 06:09:08 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85352 00:16:42.734 06:09:08 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85352 00:16:42.734 06:09:08 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:42.734 06:09:08 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85352 ']' 00:16:42.734 06:09:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.993 06:09:08 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:42.993 06:09:08 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.993 06:09:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:42.993 06:09:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:42.993 [2024-10-01 06:09:08.413447] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
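The test script has just relaunched spdk_tgt (ftl/trim.sh@92) and blocks in waitforlisten until the new process accepts connections on the UNIX domain socket /var/tmp/spdk.sock, as the "Waiting for process to start up..." message above records. A minimal Python sketch of that readiness poll, assuming connect-success on the socket path is the readiness signal (the actual shell helper in autotest_common.sh may check more, e.g. that the pid is still alive):

import os
import socket
import time

def wait_for_listen(sock_path="/var/tmp/spdk.sock", timeout_s=30.0):
    """Poll until the SPDK target creates sock_path and accepts a connection."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)   # succeeds once spdk_tgt is servicing RPCs
                return
            except OSError:
                pass                   # socket exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(0.1)
    raise TimeoutError(f"{sock_path} not accepting connections within {timeout_s}s")

Once this returns, the rpc.py invocations that follow in the log (load_config, then the two bdev_ftl_unmap trims) can all be issued against the same socket.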
00:16:43.007 [2024-10-01 06:09:08.413561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85352 ] 00:16:43.007 [2024-10-01 06:09:08.546073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.007 [2024-10-01 06:09:08.588840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.940 06:09:09 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.940 06:09:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:43.941 06:09:09 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:43.941 [2024-10-01 06:09:09.427178] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:43.941 [2024-10-01 06:09:09.427249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:44.201 [2024-10-01 06:09:09.597599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.597655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.201 [2024-10-01 06:09:09.597669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:44.201 [2024-10-01 06:09:09.597680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.600116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.600155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.201 [2024-10-01 06:09:09.600167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.418 ms 00:16:44.201 [2024-10-01 06:09:09.600177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.600245] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.201 [2024-10-01 06:09:09.600493] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.201 [2024-10-01 06:09:09.600514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.600526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.201 [2024-10-01 06:09:09.600541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:44.201 [2024-10-01 06:09:09.600553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.602078] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:44.201 [2024-10-01 06:09:09.604798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.604834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:44.201 [2024-10-01 06:09:09.604860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.718 ms 00:16:44.201 [2024-10-01 06:09:09.604870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.604929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.604940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:44.201 [2024-10-01 06:09:09.604952] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:44.201 [2024-10-01 06:09:09.604960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.611292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.611323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.201 [2024-10-01 06:09:09.611339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.279 ms 00:16:44.201 [2024-10-01 06:09:09.611347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.611482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.611495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.201 [2024-10-01 06:09:09.611506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:44.201 [2024-10-01 06:09:09.611514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.611546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.611567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.201 [2024-10-01 06:09:09.611576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:44.201 [2024-10-01 06:09:09.611587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.611612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:44.201 [2024-10-01 06:09:09.613264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.613297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.201 [2024-10-01 06:09:09.613311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:16:44.201 [2024-10-01 06:09:09.613321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.613364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.613374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.201 [2024-10-01 06:09:09.613383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:44.201 [2024-10-01 06:09:09.613393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.613414] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:44.201 [2024-10-01 06:09:09.613434] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:44.201 [2024-10-01 06:09:09.613477] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:44.201 [2024-10-01 06:09:09.613501] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:44.201 [2024-10-01 06:09:09.613609] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:44.201 [2024-10-01 06:09:09.613626] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.201 [2024-10-01 06:09:09.613636] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:44.201 [2024-10-01 06:09:09.613649] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.201 [2024-10-01 06:09:09.613658] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.201 [2024-10-01 06:09:09.613670] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:44.201 [2024-10-01 06:09:09.613677] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.201 [2024-10-01 06:09:09.613686] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:44.201 [2024-10-01 06:09:09.613694] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:44.201 [2024-10-01 06:09:09.613703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.613713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.201 [2024-10-01 06:09:09.613723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:16:44.201 [2024-10-01 06:09:09.613730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.613822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.201 [2024-10-01 06:09:09.613836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.201 [2024-10-01 06:09:09.613857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:44.201 [2024-10-01 06:09:09.613865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.201 [2024-10-01 06:09:09.613968] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.201 [2024-10-01 06:09:09.613987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.201 [2024-10-01 06:09:09.614002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.201 [2024-10-01 06:09:09.614012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.201 [2024-10-01 06:09:09.614025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.201 [2024-10-01 06:09:09.614032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.201 [2024-10-01 06:09:09.614044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:44.202 [2024-10-01 06:09:09.614053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.202 [2024-10-01 06:09:09.614063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.202 [2024-10-01 06:09:09.614081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.202 [2024-10-01 06:09:09.614088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:44.202 [2024-10-01 06:09:09.614097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.202 [2024-10-01 06:09:09.614105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.202 [2024-10-01 06:09:09.614115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:44.202 [2024-10-01 06:09:09.614123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.202 
[2024-10-01 06:09:09.614133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:44.202 [2024-10-01 06:09:09.614141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:44.202 [2024-10-01 06:09:09.614150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.202 [2024-10-01 06:09:09.614169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.202 [2024-10-01 06:09:09.614186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.202 [2024-10-01 06:09:09.614193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.202 [2024-10-01 06:09:09.614210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.202 [2024-10-01 06:09:09.614220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.202 [2024-10-01 06:09:09.614239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:44.202 [2024-10-01 06:09:09.614247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.202 [2024-10-01 06:09:09.614264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.202 [2024-10-01 06:09:09.614273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.202 [2024-10-01 06:09:09.614290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.202 [2024-10-01 06:09:09.614298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:44.202 [2024-10-01 06:09:09.614308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.202 [2024-10-01 06:09:09.614316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:44.202 [2024-10-01 06:09:09.614327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:44.202 [2024-10-01 06:09:09.614334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:44.202 [2024-10-01 06:09:09.614352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:44.202 [2024-10-01 06:09:09.614361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614367] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.202 [2024-10-01 06:09:09.614377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.202 [2024-10-01 06:09:09.614384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.202 [2024-10-01 06:09:09.614392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.202 [2024-10-01 06:09:09.614400] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:44.202 [2024-10-01 06:09:09.614408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.202 [2024-10-01 06:09:09.614414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.202 [2024-10-01 06:09:09.614423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.202 [2024-10-01 06:09:09.614429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.202 [2024-10-01 06:09:09.614439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.202 [2024-10-01 06:09:09.614448] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.202 [2024-10-01 06:09:09.614460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.202 [2024-10-01 06:09:09.614468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:44.202 [2024-10-01 06:09:09.614478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:44.202 [2024-10-01 06:09:09.614485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:44.202 [2024-10-01 06:09:09.614494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:44.202 [2024-10-01 06:09:09.614500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:44.202 [2024-10-01 06:09:09.614509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:44.202 [2024-10-01 06:09:09.614516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:44.202 [2024-10-01 06:09:09.614525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:44.202 [2024-10-01 06:09:09.614532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:44.202 [2024-10-01 06:09:09.614540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:44.202 [2024-10-01 06:09:09.614548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:44.202 [2024-10-01 06:09:09.614556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:44.202 [2024-10-01 06:09:09.614563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:44.202 [2024-10-01 06:09:09.614574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:44.202 [2024-10-01 06:09:09.614587] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.202 [2024-10-01 
06:09:09.614601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.202 [2024-10-01 06:09:09.614609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.202 [2024-10-01 06:09:09.614618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.202 [2024-10-01 06:09:09.614625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.202 [2024-10-01 06:09:09.614634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.202 [2024-10-01 06:09:09.614642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.202 [2024-10-01 06:09:09.614654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.202 [2024-10-01 06:09:09.614662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:16:44.202 [2024-10-01 06:09:09.614671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.202 [2024-10-01 06:09:09.626138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.202 [2024-10-01 06:09:09.626174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.202 [2024-10-01 06:09:09.626185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.410 ms 00:16:44.202 [2024-10-01 06:09:09.626196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.202 [2024-10-01 06:09:09.626319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.202 [2024-10-01 06:09:09.626341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:44.202 [2024-10-01 06:09:09.626356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:44.202 [2024-10-01 06:09:09.626366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.202 [2024-10-01 06:09:09.636526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.202 [2024-10-01 06:09:09.636565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.202 [2024-10-01 06:09:09.636575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.137 ms 00:16:44.202 [2024-10-01 06:09:09.636585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.202 [2024-10-01 06:09:09.636658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.202 [2024-10-01 06:09:09.636674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.202 [2024-10-01 06:09:09.636683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:44.202 [2024-10-01 06:09:09.636692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.202 [2024-10-01 06:09:09.637106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.202 [2024-10-01 06:09:09.637130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.202 [2024-10-01 06:09:09.637139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:16:44.202 [2024-10-01 06:09:09.637149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:44.202 [2024-10-01 06:09:09.637296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.202 [2024-10-01 06:09:09.637316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.202 [2024-10-01 06:09:09.637327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:44.202 [2024-10-01 06:09:09.637340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.202 [2024-10-01 06:09:09.656907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.202 [2024-10-01 06:09:09.656973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.202 [2024-10-01 06:09:09.656994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.540 ms 00:16:44.202 [2024-10-01 06:09:09.657018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.202 [2024-10-01 06:09:09.660345] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:44.203 [2024-10-01 06:09:09.660405] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:44.203 [2024-10-01 06:09:09.660424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.660441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:44.203 [2024-10-01 06:09:09.660455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:16:44.203 [2024-10-01 06:09:09.660469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.676528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.676584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:44.203 [2024-10-01 06:09:09.676596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.996 ms 00:16:44.203 [2024-10-01 06:09:09.676608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.678297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.678332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:44.203 [2024-10-01 06:09:09.678341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:16:44.203 [2024-10-01 06:09:09.678351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.679731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.679766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:44.203 [2024-10-01 06:09:09.679775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:16:44.203 [2024-10-01 06:09:09.679784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.680142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.680163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:44.203 [2024-10-01 06:09:09.680178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:44.203 [2024-10-01 06:09:09.680188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 
06:09:09.698087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.698144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:44.203 [2024-10-01 06:09:09.698161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.871 ms 00:16:44.203 [2024-10-01 06:09:09.698174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.705805] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:44.203 [2024-10-01 06:09:09.722560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.722615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:44.203 [2024-10-01 06:09:09.722630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.311 ms 00:16:44.203 [2024-10-01 06:09:09.722639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.722741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.722759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:44.203 [2024-10-01 06:09:09.722770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:44.203 [2024-10-01 06:09:09.722780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.722835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.722861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:44.203 [2024-10-01 06:09:09.722876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:44.203 [2024-10-01 06:09:09.722883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.722917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.722926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:44.203 [2024-10-01 06:09:09.722938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:44.203 [2024-10-01 06:09:09.722946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.722985] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:44.203 [2024-10-01 06:09:09.722999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.723009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:44.203 [2024-10-01 06:09:09.723016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:44.203 [2024-10-01 06:09:09.723029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.726869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.726902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:44.203 [2024-10-01 06:09:09.726913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.817 ms 00:16:44.203 [2024-10-01 06:09:09.726923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.727018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.203 [2024-10-01 06:09:09.727031] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:44.203 [2024-10-01 06:09:09.727039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:44.203 [2024-10-01 06:09:09.727048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.203 [2024-10-01 06:09:09.728034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.203 [2024-10-01 06:09:09.729033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.112 ms, result 0 00:16:44.203 [2024-10-01 06:09:09.730101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:44.203 Some configs were skipped because the RPC state that can call them passed over. 00:16:44.203 06:09:09 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:44.461 [2024-10-01 06:09:09.913397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.462 [2024-10-01 06:09:09.913465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:44.462 [2024-10-01 06:09:09.913498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:16:44.462 [2024-10-01 06:09:09.913506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.462 [2024-10-01 06:09:09.913542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.161 ms, result 0 00:16:44.462 true 00:16:44.462 06:09:09 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:44.722 [2024-10-01 06:09:10.088792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.088858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:44.722 [2024-10-01 06:09:10.088872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:16:44.722 [2024-10-01 06:09:10.088882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.088918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.345 ms, result 0 00:16:44.722 true 00:16:44.722 06:09:10 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85352 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85352 ']' 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85352 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85352 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.722 killing process with pid 85352 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85352' 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85352 00:16:44.722 06:09:10 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85352 00:16:44.722 [2024-10-01 06:09:10.250276] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.250343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:44.722 [2024-10-01 06:09:10.250359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:44.722 [2024-10-01 06:09:10.250367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.250394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:44.722 [2024-10-01 06:09:10.250954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.250981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:44.722 [2024-10-01 06:09:10.250991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:16:44.722 [2024-10-01 06:09:10.251000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.251311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.251331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:44.722 [2024-10-01 06:09:10.251340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:16:44.722 [2024-10-01 06:09:10.251355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.255453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.255487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:44.722 [2024-10-01 06:09:10.255497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.077 ms 00:16:44.722 [2024-10-01 06:09:10.255506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.262479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.262517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:44.722 [2024-10-01 06:09:10.262527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.940 ms 00:16:44.722 [2024-10-01 06:09:10.262538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.264033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.264069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:44.722 [2024-10-01 06:09:10.264078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:16:44.722 [2024-10-01 06:09:10.264088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.268101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.268136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:44.722 [2024-10-01 06:09:10.268146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.978 ms 00:16:44.722 [2024-10-01 06:09:10.268156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.268286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.268305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:44.722 [2024-10-01 06:09:10.268314] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:16:44.722 [2024-10-01 06:09:10.268324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.722 [2024-10-01 06:09:10.270305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.722 [2024-10-01 06:09:10.270338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:44.722 [2024-10-01 06:09:10.270347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.962 ms 00:16:44.722 [2024-10-01 06:09:10.270360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.723 [2024-10-01 06:09:10.272048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.723 [2024-10-01 06:09:10.272081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:44.723 [2024-10-01 06:09:10.272089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:16:44.723 [2024-10-01 06:09:10.272099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.723 [2024-10-01 06:09:10.273029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.723 [2024-10-01 06:09:10.273060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:44.723 [2024-10-01 06:09:10.273069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:16:44.723 [2024-10-01 06:09:10.273078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.723 [2024-10-01 06:09:10.274133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.723 [2024-10-01 06:09:10.274166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:44.723 [2024-10-01 06:09:10.274175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:16:44.723 [2024-10-01 06:09:10.274184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.723 [2024-10-01 06:09:10.274215] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:44.723 [2024-10-01 06:09:10.274232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274323] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 
[2024-10-01 06:09:10.274541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:16:44.723 [2024-10-01 06:09:10.274748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:44.723 [2024-10-01 06:09:10.274920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.274997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:44.724 [2024-10-01 06:09:10.275109] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:44.724 [2024-10-01 06:09:10.275117] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f441fd4a-13ea-4411-bf3b-066c792337e7 00:16:44.724 [2024-10-01 06:09:10.275127] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:44.724 [2024-10-01 06:09:10.275134] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:44.724 [2024-10-01 06:09:10.275143] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:44.724 [2024-10-01 06:09:10.275153] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:44.724 [2024-10-01 06:09:10.275161] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:44.724 [2024-10-01 06:09:10.275169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:44.724 [2024-10-01 06:09:10.275178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:44.724 [2024-10-01 06:09:10.275184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:44.724 [2024-10-01 06:09:10.275192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:44.724 [2024-10-01 06:09:10.275199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:44.724 [2024-10-01 06:09:10.275211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:44.724 [2024-10-01 06:09:10.275219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:16:44.724 [2024-10-01 06:09:10.275231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.277036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.724 [2024-10-01 06:09:10.277068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:44.724 [2024-10-01 06:09:10.277079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:16:44.724 [2024-10-01 06:09:10.277089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.277185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.724 [2024-10-01 06:09:10.277197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:44.724 [2024-10-01 06:09:10.277222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:44.724 [2024-10-01 06:09:10.277233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.283550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.283598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.724 [2024-10-01 06:09:10.283609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.283618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.283713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.283750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.724 [2024-10-01 06:09:10.283763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.283775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.283821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.283832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.724 [2024-10-01 06:09:10.283856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.283866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.283886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.283899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.724 [2024-10-01 06:09:10.283907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.283916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.295436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.295490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.724 [2024-10-01 06:09:10.295502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.295511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 
06:09:10.304295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.304350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.724 [2024-10-01 06:09:10.304362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.304374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.304430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.304449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.724 [2024-10-01 06:09:10.304457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.304469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.304507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.304517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.724 [2024-10-01 06:09:10.304525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.304534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.304610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.304628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.724 [2024-10-01 06:09:10.304637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.304646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.304679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.304690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:44.724 [2024-10-01 06:09:10.304698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.304709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.304750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.304764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.724 [2024-10-01 06:09:10.304771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.304783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.304833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.724 [2024-10-01 06:09:10.304859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.724 [2024-10-01 06:09:10.304868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.724 [2024-10-01 06:09:10.304877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.724 [2024-10-01 06:09:10.305025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.720 ms, result 0 00:16:44.983 06:09:10 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.983 [2024-10-01 06:09:10.586289] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:44.983 [2024-10-01 06:09:10.586385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85388 ] 00:16:45.242 [2024-10-01 06:09:10.720773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.242 [2024-10-01 06:09:10.763708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.501 [2024-10-01 06:09:10.865544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.501 [2024-10-01 06:09:10.865625] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.501 [2024-10-01 06:09:11.020197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.501 [2024-10-01 06:09:11.020262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:45.501 [2024-10-01 06:09:11.020277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.501 [2024-10-01 06:09:11.020286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.501 [2024-10-01 06:09:11.022638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.022683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.502 [2024-10-01 06:09:11.022697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:16:45.502 [2024-10-01 06:09:11.022705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.022778] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:45.502 [2024-10-01 06:09:11.023034] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:45.502 [2024-10-01 06:09:11.023056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.023065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.502 [2024-10-01 06:09:11.023077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:16:45.502 [2024-10-01 06:09:11.023085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.024656] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:45.502 [2024-10-01 06:09:11.027257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.027292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:45.502 [2024-10-01 06:09:11.027310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.601 ms 00:16:45.502 [2024-10-01 06:09:11.027318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.027379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.027389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:45.502 [2024-10-01 06:09:11.027397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:45.502 [2024-10-01 
06:09:11.027405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.033870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.033903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.502 [2024-10-01 06:09:11.033913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.424 ms 00:16:45.502 [2024-10-01 06:09:11.033921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.034056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.034074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.502 [2024-10-01 06:09:11.034083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:45.502 [2024-10-01 06:09:11.034090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.034122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.034131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:45.502 [2024-10-01 06:09:11.034142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:45.502 [2024-10-01 06:09:11.034150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.034173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:45.502 [2024-10-01 06:09:11.035796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.035825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.502 [2024-10-01 06:09:11.035834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:16:45.502 [2024-10-01 06:09:11.035842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.035891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.035903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:45.502 [2024-10-01 06:09:11.035914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:45.502 [2024-10-01 06:09:11.035922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.035942] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:45.502 [2024-10-01 06:09:11.035960] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:45.502 [2024-10-01 06:09:11.036004] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:45.502 [2024-10-01 06:09:11.036029] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:45.502 [2024-10-01 06:09:11.036136] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:45.502 [2024-10-01 06:09:11.036152] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:45.502 [2024-10-01 06:09:11.036163] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:16:45.502 [2024-10-01 06:09:11.036173] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036182] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036190] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:45.502 [2024-10-01 06:09:11.036202] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:45.502 [2024-10-01 06:09:11.036209] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:45.502 [2024-10-01 06:09:11.036216] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:45.502 [2024-10-01 06:09:11.036224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.036234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:45.502 [2024-10-01 06:09:11.036243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:16:45.502 [2024-10-01 06:09:11.036250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.036340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.502 [2024-10-01 06:09:11.036354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:45.502 [2024-10-01 06:09:11.036361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:45.502 [2024-10-01 06:09:11.036369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.502 [2024-10-01 06:09:11.036470] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:45.502 [2024-10-01 06:09:11.036486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:45.502 [2024-10-01 06:09:11.036495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:45.502 [2024-10-01 06:09:11.036524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:45.502 [2024-10-01 06:09:11.036551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.502 [2024-10-01 06:09:11.036571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:45.502 [2024-10-01 06:09:11.036579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:45.502 [2024-10-01 06:09:11.036587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.502 [2024-10-01 06:09:11.036595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:45.502 [2024-10-01 06:09:11.036603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:45.502 [2024-10-01 06:09:11.036610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:16:45.502 [2024-10-01 06:09:11.036626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:45.502 [2024-10-01 06:09:11.036649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:45.502 [2024-10-01 06:09:11.036672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:45.502 [2024-10-01 06:09:11.036700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:45.502 [2024-10-01 06:09:11.036723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.502 [2024-10-01 06:09:11.036738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:45.502 [2024-10-01 06:09:11.036746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.502 [2024-10-01 06:09:11.036761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:45.502 [2024-10-01 06:09:11.036769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:45.502 [2024-10-01 06:09:11.036777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.502 [2024-10-01 06:09:11.036784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:45.502 [2024-10-01 06:09:11.036793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:45.502 [2024-10-01 06:09:11.036799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:45.502 [2024-10-01 06:09:11.036814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:45.502 [2024-10-01 06:09:11.036822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.502 [2024-10-01 06:09:11.036829] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:45.502 [2024-10-01 06:09:11.036836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:45.503 [2024-10-01 06:09:11.036866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.503 [2024-10-01 06:09:11.036875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.503 [2024-10-01 06:09:11.036882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:45.503 [2024-10-01 06:09:11.036890] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:45.503 [2024-10-01 06:09:11.036896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:45.503 [2024-10-01 06:09:11.036903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:45.503 [2024-10-01 06:09:11.036909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:45.503 [2024-10-01 06:09:11.036916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:45.503 [2024-10-01 06:09:11.036924] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:45.503 [2024-10-01 06:09:11.036934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.503 [2024-10-01 06:09:11.036943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:45.503 [2024-10-01 06:09:11.036950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:45.503 [2024-10-01 06:09:11.036960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:45.503 [2024-10-01 06:09:11.036968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:45.503 [2024-10-01 06:09:11.036975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:45.503 [2024-10-01 06:09:11.036982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:45.503 [2024-10-01 06:09:11.036989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:45.503 [2024-10-01 06:09:11.037002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:45.503 [2024-10-01 06:09:11.037009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:45.503 [2024-10-01 06:09:11.037017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:45.503 [2024-10-01 06:09:11.037025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:45.503 [2024-10-01 06:09:11.037032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:45.503 [2024-10-01 06:09:11.037039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:45.503 [2024-10-01 06:09:11.037046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:45.503 [2024-10-01 06:09:11.037053] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:45.503 [2024-10-01 06:09:11.037061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.503 [2024-10-01 06:09:11.037069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:45.503 [2024-10-01 06:09:11.037077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:45.503 [2024-10-01 06:09:11.037086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:45.503 [2024-10-01 06:09:11.037093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:45.503 [2024-10-01 06:09:11.037102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.037109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:45.503 [2024-10-01 06:09:11.037119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:16:45.503 [2024-10-01 06:09:11.037126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.057730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.057776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.503 [2024-10-01 06:09:11.057794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.552 ms 00:16:45.503 [2024-10-01 06:09:11.057803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.057969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.057987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.503 [2024-10-01 06:09:11.057997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:45.503 [2024-10-01 06:09:11.058008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.068156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.068193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.503 [2024-10-01 06:09:11.068204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.125 ms 00:16:45.503 [2024-10-01 06:09:11.068214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.068289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.068304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.503 [2024-10-01 06:09:11.068315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:45.503 [2024-10-01 06:09:11.068324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.068733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.068757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.503 [2024-10-01 06:09:11.068768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:16:45.503 [2024-10-01 06:09:11.068776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.068944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.068962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.503 [2024-10-01 06:09:11.068973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:16:45.503 [2024-10-01 06:09:11.068986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.075216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.075247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.503 [2024-10-01 06:09:11.075257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.205 ms 00:16:45.503 [2024-10-01 06:09:11.075264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.077979] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:45.503 [2024-10-01 06:09:11.078019] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:45.503 [2024-10-01 06:09:11.078030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.078038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:45.503 [2024-10-01 06:09:11.078046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:16:45.503 [2024-10-01 06:09:11.078054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.092533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.092567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:45.503 [2024-10-01 06:09:11.092580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.435 ms 00:16:45.503 [2024-10-01 06:09:11.092588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.094396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.094427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:45.503 [2024-10-01 06:09:11.094435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.739 ms 00:16:45.503 [2024-10-01 06:09:11.094442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.095831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.095874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:45.503 [2024-10-01 06:09:11.095890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:16:45.503 [2024-10-01 06:09:11.095897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.096509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.096550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:45.503 [2024-10-01 06:09:11.096562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:45.503 [2024-10-01 06:09:11.096578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.503 [2024-10-01 06:09:11.114046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.503 [2024-10-01 06:09:11.114100] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:45.503 [2024-10-01 06:09:11.114114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.439 ms 00:16:45.503 [2024-10-01 06:09:11.114122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.762 [2024-10-01 06:09:11.121740] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:45.762 [2024-10-01 06:09:11.139046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.762 [2024-10-01 06:09:11.139101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:45.762 [2024-10-01 06:09:11.139115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.824 ms 00:16:45.762 [2024-10-01 06:09:11.139131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.762 [2024-10-01 06:09:11.139246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.762 [2024-10-01 06:09:11.139257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:45.762 [2024-10-01 06:09:11.139270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:45.762 [2024-10-01 06:09:11.139285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.762 [2024-10-01 06:09:11.139348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.762 [2024-10-01 06:09:11.139357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:45.762 [2024-10-01 06:09:11.139369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:45.762 [2024-10-01 06:09:11.139380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.762 [2024-10-01 06:09:11.139403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.762 [2024-10-01 06:09:11.139411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:45.763 [2024-10-01 06:09:11.139419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:45.763 [2024-10-01 06:09:11.139427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.763 [2024-10-01 06:09:11.139462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:45.763 [2024-10-01 06:09:11.139472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.763 [2024-10-01 06:09:11.139486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:45.763 [2024-10-01 06:09:11.139494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:45.763 [2024-10-01 06:09:11.139502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.763 [2024-10-01 06:09:11.143016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.763 [2024-10-01 06:09:11.143052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:45.763 [2024-10-01 06:09:11.143064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.494 ms 00:16:45.763 [2024-10-01 06:09:11.143073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.763 [2024-10-01 06:09:11.143154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.763 [2024-10-01 06:09:11.143168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:45.763 [2024-10-01 06:09:11.143178] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:45.763 [2024-10-01 06:09:11.143191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.763 [2024-10-01 06:09:11.144124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.763 [2024-10-01 06:09:11.145099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.621 ms, result 0 00:16:45.763 [2024-10-01 06:09:11.145814] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.763 [2024-10-01 06:09:11.155873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.853  Copying: 43/256 [MB] (43 MBps) Copying: 87/256 [MB] (43 MBps) Copying: 126/256 [MB] (38 MBps) Copying: 150/256 [MB] (24 MBps) Copying: 171/256 [MB] (21 MBps) Copying: 192/256 [MB] (20 MBps) Copying: 213/256 [MB] (21 MBps) Copying: 235/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 29 MBps)[2024-10-01 06:09:20.341145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.853 [2024-10-01 06:09:20.343327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.343375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:54.853 [2024-10-01 06:09:20.343391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:54.853 [2024-10-01 06:09:20.343406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.343430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:54.853 [2024-10-01 06:09:20.344291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.344331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:54.853 [2024-10-01 06:09:20.344350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:16:54.853 [2024-10-01 06:09:20.344359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.344625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.344641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:54.853 [2024-10-01 06:09:20.344650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:16:54.853 [2024-10-01 06:09:20.344658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.347433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.347460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:54.853 [2024-10-01 06:09:20.347470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:16:54.853 [2024-10-01 06:09:20.347477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.353404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.353439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:54.853 [2024-10-01 06:09:20.353449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.909 ms 00:16:54.853 
[2024-10-01 06:09:20.353456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.355700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.355745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:54.853 [2024-10-01 06:09:20.355755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:16:54.853 [2024-10-01 06:09:20.355762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.360390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.360433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:54.853 [2024-10-01 06:09:20.360450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.588 ms 00:16:54.853 [2024-10-01 06:09:20.360458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.360654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.360682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:54.853 [2024-10-01 06:09:20.360692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:54.853 [2024-10-01 06:09:20.360698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.363413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-10-01 06:09:20.363449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:54.853 [2024-10-01 06:09:20.363458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.699 ms 00:16:54.853 [2024-10-01 06:09:20.363465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-10-01 06:09:20.365146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.854 [2024-10-01 06:09:20.365183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:54.854 [2024-10-01 06:09:20.365191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:16:54.854 [2024-10-01 06:09:20.365197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.854 [2024-10-01 06:09:20.367044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.854 [2024-10-01 06:09:20.367077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:54.854 [2024-10-01 06:09:20.367086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.802 ms 00:16:54.854 [2024-10-01 06:09:20.367092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.854 [2024-10-01 06:09:20.368510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.854 [2024-10-01 06:09:20.368546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:54.854 [2024-10-01 06:09:20.368555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:16:54.854 [2024-10-01 06:09:20.368562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.854 [2024-10-01 06:09:20.368594] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:54.854 [2024-10-01 06:09:20.368614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 
06:09:20.368628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 
[2024-10-01 06:09:20.368784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:16:54.854 [2024-10-01 06:09:20.368971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.368996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:54.854 [2024-10-01 06:09:20.369127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free
00:16:54.854 [2024-10-01 06:09:20.369133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:16:54.854 [2024-10-01 06:09:20.369139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:16:54.854 [2024-10-01 06:09:20.369145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:16:54.854 [2024-10-01 06:09:20.369151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:16:54.855 [2024-10-01 06:09:20.369309] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:54.855 [2024-10-01 06:09:20.369316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f441fd4a-13ea-4411-bf3b-066c792337e7
00:16:54.855 [2024-10-01 06:09:20.369323] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:54.855 [2024-10-01 06:09:20.369330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:54.855 [2024-10-01 06:09:20.369337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:54.855 [2024-10-01 06:09:20.369344] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:54.855 [2024-10-01 06:09:20.369351] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:54.855 [2024-10-01 06:09:20.369358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:16:54.855 [2024-10-01 06:09:20.369368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:16:54.855 [2024-10-01 06:09:20.369374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:16:54.855 [2024-10-01 06:09:20.369380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:16:54.855 [2024-10-01 06:09:20.369387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:54.855 [2024-10-01 06:09:20.369394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:16:54.855 [2024-10-01 06:09:20.369404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms
00:16:54.855 [2024-10-01 06:09:20.369413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.371928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:54.855 [2024-10-01 06:09:20.371955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:16:54.855 [2024-10-01 06:09:20.371964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms
00:16:54.855 [2024-10-01 06:09:20.371972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.372101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:54.855 [2024-10-01 06:09:20.372120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:16:54.855 [2024-10-01 06:09:20.372128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms
00:16:54.855 [2024-10-01 06:09:20.372135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.379967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.379999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:54.855 [2024-10-01 06:09:20.380008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.380016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.380088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.380107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:54.855 [2024-10-01 06:09:20.380114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.380121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.380162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.380170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:54.855 [2024-10-01 06:09:20.380177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.380184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.380201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.380208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:54.855 [2024-10-01 06:09:20.380218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.380224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.394516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.394561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:54.855 [2024-10-01 06:09:20.394571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.394586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.404488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.404536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:54.855 [2024-10-01 06:09:20.404546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.404552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.404582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.404590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:54.855 [2024-10-01 06:09:20.404601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.404609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.404654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.404662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:54.855 [2024-10-01 06:09:20.404673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.404683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.404744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.404753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:54.855 [2024-10-01 06:09:20.404759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.404766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.404793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.404801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:54.855 [2024-10-01 06:09:20.404809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.404815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.404875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.404883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:54.855 [2024-10-01 06:09:20.404890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.404896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.404943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:54.855 [2024-10-01 06:09:20.404952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:54.855 [2024-10-01 06:09:20.404959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:54.855 [2024-10-01 06:09:20.404968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:54.855 [2024-10-01 06:09:20.405105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.764 ms, result 0
00:16:55.115
00:16:55.115
00:16:55.115 06:09:20 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:16:55.686 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:16:55.686 06:09:21 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:16:55.686 06:09:21 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:16:55.686 06:09:21 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:16:55.686 06:09:21 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:55.686 06:09:21 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:16:55.686 06:09:21 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:16:55.686 06:09:21 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85352
00:16:55.686 06:09:21 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85352 ']'
00:16:55.686 06:09:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85352
00:16:55.686 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85352) - No such process
00:16:55.686 Process with pid 85352 is not found 06:09:21 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85352 is not found'
00:16:55.686
00:16:55.686 real 0m44.473s
00:16:55.686 user 1m4.447s
00:16:55.686 sys 0m5.091s
00:16:55.686 06:09:21 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:55.686 06:09:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:16:55.686 ************************************
00:16:55.686 END TEST ftl_trim
00:16:55.686 ************************************
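The epilogue just above is the integrity gate for the whole trim test: trim.sh recorded an md5 of the pattern it wrote through the FTL device, re-checked it with "md5sum -c" after the device had been shut down, and only then cleared its error trap and tore everything down. A minimal sketch of that verify-then-clean-up shape, using stand-in paths (./data, ./testfile.md5) and a pid variable in place of this run's literal values:

    #!/usr/bin/env bash
    set -e
    data=./data                        # stand-in for test/ftl/data
    sums=./testfile.md5                # stand-in for test/ftl/testfile.md5
    md5sum "$data" > "$sums"           # record the checksum of the written pattern
    # ... the real test trims and restarts the FTL device between these steps ...
    md5sum -c "$sums"                  # prints "<file>: OK" or exits non-zero
    trap - SIGINT SIGTERM EXIT         # drop the failure trap once verification passed
    rm -f "$sums" "$data"              # scratch-file cleanup, as in fio_kill above
    pid=85352                          # spdk_tgt pid for this run
    kill -0 "$pid" 2>/dev/null || echo "Process with pid $pid is not found"

The "kill -0" probe is why the "No such process" line above is expected output rather than a failure: by the time killprocess runs, spdk_tgt has already exited.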
00:16:55.947 06:09:21 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:16:55.947 06:09:21 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:16:55.947 06:09:21 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:55.947 06:09:21 ftl -- common/autotest_common.sh@10 -- # set +x
00:16:55.947 ************************************
00:16:55.947 START TEST ftl_restore
00:16:55.947 ************************************
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:16:55.947 * Looking for test storage...
00:16:55.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:55.947 06:09:21 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:16:55.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:55.947 --rc genhtml_branch_coverage=1
00:16:55.947 --rc genhtml_function_coverage=1
00:16:55.947 --rc genhtml_legend=1
00:16:55.947 --rc geninfo_all_blocks=1
00:16:55.947 --rc geninfo_unexecuted_blocks=1
00:16:55.947
00:16:55.947 '
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:16:55.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:55.947 --rc genhtml_branch_coverage=1
00:16:55.947 --rc genhtml_function_coverage=1
00:16:55.947 --rc genhtml_legend=1
00:16:55.947 --rc geninfo_all_blocks=1
00:16:55.947 --rc geninfo_unexecuted_blocks=1
00:16:55.947
00:16:55.947 '
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:16:55.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:55.947 --rc genhtml_branch_coverage=1
00:16:55.947 --rc genhtml_function_coverage=1
00:16:55.947 --rc genhtml_legend=1
00:16:55.947 --rc geninfo_all_blocks=1
00:16:55.947 --rc geninfo_unexecuted_blocks=1
00:16:55.947
00:16:55.947 '
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:16:55.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:55.947 --rc genhtml_branch_coverage=1
00:16:55.947 --rc genhtml_function_coverage=1
00:16:55.947 --rc genhtml_legend=1
00:16:55.947 --rc geninfo_all_blocks=1
00:16:55.947 --rc geninfo_unexecuted_blocks=1
00:16:55.947
00:16:55.947 '
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HOiu12Eu99
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=85574
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 85574
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 85574 ']'
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:55.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:55.947 06:09:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:16:55.947 06:09:21 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:56.209 [2024-10-01 06:09:21.589926] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
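spdk_tgt is coming up here; once its reactor is running, the next stretch of the trace (create_base_bdev and the get_bdev_size helper) attaches the base NVMe namespace over PCIe and computes its size from bdev_get_bdevs output. A condensed sketch of that RPC sequence, using the same rpc.py path and PCI address as this run; the variable names (rpc, info, bs, nb) are stand-ins, while the jq filters and the MiB arithmetic mirror the helper's own steps:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    info=$($rpc bdev_get_bdevs -b nvme0n1)      # JSON array describing the bdev
    bs=$(jq '.[] .block_size' <<< "$info")      # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$info")      # 1310720 in this run
    echo "$(( bs * nb / 1024 / 1024 )) MiB"     # 4096 * 1310720 bytes -> 5120 MiB

The same pattern, bdev_get_bdevs piped through jq, recurs below for every logical volume the test inspects.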
00:16:56.209 [2024-10-01 06:09:21.590237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85574 ] 00:16:56.209 [2024-10-01 06:09:21.728141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.209 [2024-10-01 06:09:21.777087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.780 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.780 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:16:56.780 06:09:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:56.780 06:09:22 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:16:56.780 06:09:22 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:56.780 06:09:22 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:16:56.780 06:09:22 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:16:56.780 06:09:22 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:57.378 06:09:22 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:57.378 06:09:22 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:16:57.378 06:09:22 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:57.378 { 00:16:57.378 "name": "nvme0n1", 00:16:57.378 "aliases": [ 00:16:57.378 "ca5017bf-39ac-42a3-9860-bc6678974b32" 00:16:57.378 ], 00:16:57.378 "product_name": "NVMe disk", 00:16:57.378 "block_size": 4096, 00:16:57.378 "num_blocks": 1310720, 00:16:57.378 "uuid": "ca5017bf-39ac-42a3-9860-bc6678974b32", 00:16:57.378 "numa_id": -1, 00:16:57.378 "assigned_rate_limits": { 00:16:57.378 "rw_ios_per_sec": 0, 00:16:57.378 "rw_mbytes_per_sec": 0, 00:16:57.378 "r_mbytes_per_sec": 0, 00:16:57.378 "w_mbytes_per_sec": 0 00:16:57.378 }, 00:16:57.378 "claimed": true, 00:16:57.378 "claim_type": "read_many_write_one", 00:16:57.378 "zoned": false, 00:16:57.378 "supported_io_types": { 00:16:57.378 "read": true, 00:16:57.378 "write": true, 00:16:57.378 "unmap": true, 00:16:57.378 "flush": true, 00:16:57.378 "reset": true, 00:16:57.378 "nvme_admin": true, 00:16:57.378 "nvme_io": true, 00:16:57.378 "nvme_io_md": false, 00:16:57.378 "write_zeroes": true, 00:16:57.378 "zcopy": false, 00:16:57.378 "get_zone_info": false, 00:16:57.378 "zone_management": false, 00:16:57.378 "zone_append": false, 00:16:57.378 "compare": true, 00:16:57.378 "compare_and_write": false, 00:16:57.378 "abort": true, 00:16:57.378 "seek_hole": false, 00:16:57.378 "seek_data": false, 00:16:57.378 "copy": true, 00:16:57.378 "nvme_iov_md": false 00:16:57.378 }, 00:16:57.378 "driver_specific": { 00:16:57.378 "nvme": [ 
00:16:57.378 { 00:16:57.378 "pci_address": "0000:00:11.0", 00:16:57.378 "trid": { 00:16:57.378 "trtype": "PCIe", 00:16:57.378 "traddr": "0000:00:11.0" 00:16:57.378 }, 00:16:57.378 "ctrlr_data": { 00:16:57.378 "cntlid": 0, 00:16:57.378 "vendor_id": "0x1b36", 00:16:57.378 "model_number": "QEMU NVMe Ctrl", 00:16:57.378 "serial_number": "12341", 00:16:57.378 "firmware_revision": "8.0.0", 00:16:57.378 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:57.378 "oacs": { 00:16:57.378 "security": 0, 00:16:57.378 "format": 1, 00:16:57.378 "firmware": 0, 00:16:57.378 "ns_manage": 1 00:16:57.378 }, 00:16:57.378 "multi_ctrlr": false, 00:16:57.378 "ana_reporting": false 00:16:57.378 }, 00:16:57.378 "vs": { 00:16:57.378 "nvme_version": "1.4" 00:16:57.378 }, 00:16:57.378 "ns_data": { 00:16:57.378 "id": 1, 00:16:57.378 "can_share": false 00:16:57.378 } 00:16:57.378 } 00:16:57.378 ], 00:16:57.378 "mp_policy": "active_passive" 00:16:57.378 } 00:16:57.378 } 00:16:57.378 ]' 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:57.378 06:09:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:16:57.378 06:09:22 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:16:57.378 06:09:22 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:57.378 06:09:22 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:16:57.378 06:09:22 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:57.378 06:09:22 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:57.639 06:09:23 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b2e79009-7e63-4ed7-8219-a8281bd8cdf7 00:16:57.639 06:09:23 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:16:57.639 06:09:23 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2e79009-7e63-4ed7-8219-a8281bd8cdf7 00:16:57.900 06:09:23 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:58.161 06:09:23 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=61922f39-1daf-4f94-8eef-8cc5826d8a61 00:16:58.162 06:09:23 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 61922f39-1daf-4f94-8eef-8cc5826d8a61 00:16:58.423 06:09:23 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=dced692d-f253-4f49-92c2-d6965d2628d8 00:16:58.423 06:09:23 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:16:58.423 06:09:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dced692d-f253-4f49-92c2-d6965d2628d8 00:16:58.423 06:09:23 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:16:58.423 06:09:23 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:58.423 06:09:23 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=dced692d-f253-4f49-92c2-d6965d2628d8 00:16:58.423 06:09:23 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:16:58.423 06:09:23 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
dced692d-f253-4f49-92c2-d6965d2628d8 00:16:58.423 06:09:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=dced692d-f253-4f49-92c2-d6965d2628d8 00:16:58.423 06:09:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:58.423 06:09:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:58.423 06:09:23 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:58.423 06:09:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dced692d-f253-4f49-92c2-d6965d2628d8 00:16:58.684 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:58.684 { 00:16:58.684 "name": "dced692d-f253-4f49-92c2-d6965d2628d8", 00:16:58.684 "aliases": [ 00:16:58.684 "lvs/nvme0n1p0" 00:16:58.684 ], 00:16:58.684 "product_name": "Logical Volume", 00:16:58.684 "block_size": 4096, 00:16:58.684 "num_blocks": 26476544, 00:16:58.684 "uuid": "dced692d-f253-4f49-92c2-d6965d2628d8", 00:16:58.684 "assigned_rate_limits": { 00:16:58.684 "rw_ios_per_sec": 0, 00:16:58.684 "rw_mbytes_per_sec": 0, 00:16:58.684 "r_mbytes_per_sec": 0, 00:16:58.684 "w_mbytes_per_sec": 0 00:16:58.685 }, 00:16:58.685 "claimed": false, 00:16:58.685 "zoned": false, 00:16:58.685 "supported_io_types": { 00:16:58.685 "read": true, 00:16:58.685 "write": true, 00:16:58.685 "unmap": true, 00:16:58.685 "flush": false, 00:16:58.685 "reset": true, 00:16:58.685 "nvme_admin": false, 00:16:58.685 "nvme_io": false, 00:16:58.685 "nvme_io_md": false, 00:16:58.685 "write_zeroes": true, 00:16:58.685 "zcopy": false, 00:16:58.685 "get_zone_info": false, 00:16:58.685 "zone_management": false, 00:16:58.685 "zone_append": false, 00:16:58.685 "compare": false, 00:16:58.685 "compare_and_write": false, 00:16:58.685 "abort": false, 00:16:58.685 "seek_hole": true, 00:16:58.685 "seek_data": true, 00:16:58.685 "copy": false, 00:16:58.685 "nvme_iov_md": false 00:16:58.685 }, 00:16:58.685 "driver_specific": { 00:16:58.685 "lvol": { 00:16:58.685 "lvol_store_uuid": "61922f39-1daf-4f94-8eef-8cc5826d8a61", 00:16:58.685 "base_bdev": "nvme0n1", 00:16:58.685 "thin_provision": true, 00:16:58.685 "num_allocated_clusters": 0, 00:16:58.685 "snapshot": false, 00:16:58.685 "clone": false, 00:16:58.685 "esnap_clone": false 00:16:58.685 } 00:16:58.685 } 00:16:58.685 } 00:16:58.685 ]' 00:16:58.685 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:58.685 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:58.685 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:58.685 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:58.685 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:58.685 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:58.685 06:09:24 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:16:58.685 06:09:24 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:16:58.685 06:09:24 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:58.946 06:09:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:58.946 06:09:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:58.946 06:09:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size dced692d-f253-4f49-92c2-d6965d2628d8 00:16:58.946 06:09:24 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=dced692d-f253-4f49-92c2-d6965d2628d8 00:16:58.947 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:58.947 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:58.947 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:58.947 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dced692d-f253-4f49-92c2-d6965d2628d8 00:16:59.206 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:59.206 { 00:16:59.206 "name": "dced692d-f253-4f49-92c2-d6965d2628d8", 00:16:59.206 "aliases": [ 00:16:59.206 "lvs/nvme0n1p0" 00:16:59.206 ], 00:16:59.206 "product_name": "Logical Volume", 00:16:59.206 "block_size": 4096, 00:16:59.206 "num_blocks": 26476544, 00:16:59.206 "uuid": "dced692d-f253-4f49-92c2-d6965d2628d8", 00:16:59.206 "assigned_rate_limits": { 00:16:59.206 "rw_ios_per_sec": 0, 00:16:59.206 "rw_mbytes_per_sec": 0, 00:16:59.206 "r_mbytes_per_sec": 0, 00:16:59.206 "w_mbytes_per_sec": 0 00:16:59.206 }, 00:16:59.206 "claimed": false, 00:16:59.206 "zoned": false, 00:16:59.206 "supported_io_types": { 00:16:59.206 "read": true, 00:16:59.206 "write": true, 00:16:59.206 "unmap": true, 00:16:59.206 "flush": false, 00:16:59.206 "reset": true, 00:16:59.206 "nvme_admin": false, 00:16:59.206 "nvme_io": false, 00:16:59.206 "nvme_io_md": false, 00:16:59.206 "write_zeroes": true, 00:16:59.206 "zcopy": false, 00:16:59.206 "get_zone_info": false, 00:16:59.206 "zone_management": false, 00:16:59.206 "zone_append": false, 00:16:59.206 "compare": false, 00:16:59.206 "compare_and_write": false, 00:16:59.206 "abort": false, 00:16:59.206 "seek_hole": true, 00:16:59.206 "seek_data": true, 00:16:59.206 "copy": false, 00:16:59.206 "nvme_iov_md": false 00:16:59.206 }, 00:16:59.206 "driver_specific": { 00:16:59.206 "lvol": { 00:16:59.206 "lvol_store_uuid": "61922f39-1daf-4f94-8eef-8cc5826d8a61", 00:16:59.206 "base_bdev": "nvme0n1", 00:16:59.206 "thin_provision": true, 00:16:59.206 "num_allocated_clusters": 0, 00:16:59.206 "snapshot": false, 00:16:59.206 "clone": false, 00:16:59.206 "esnap_clone": false 00:16:59.206 } 00:16:59.206 } 00:16:59.206 } 00:16:59.206 ]' 00:16:59.206 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:59.206 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:59.206 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:59.206 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:59.206 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:59.206 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:59.206 06:09:24 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:16:59.206 06:09:24 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:59.464 06:09:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:16:59.464 06:09:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size dced692d-f253-4f49-92c2-d6965d2628d8 00:16:59.464 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=dced692d-f253-4f49-92c2-d6965d2628d8 00:16:59.464 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:59.464 06:09:24 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:16:59.464 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:59.464 06:09:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dced692d-f253-4f49-92c2-d6965d2628d8 00:16:59.723 06:09:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:59.723 { 00:16:59.723 "name": "dced692d-f253-4f49-92c2-d6965d2628d8", 00:16:59.723 "aliases": [ 00:16:59.723 "lvs/nvme0n1p0" 00:16:59.723 ], 00:16:59.723 "product_name": "Logical Volume", 00:16:59.723 "block_size": 4096, 00:16:59.723 "num_blocks": 26476544, 00:16:59.723 "uuid": "dced692d-f253-4f49-92c2-d6965d2628d8", 00:16:59.723 "assigned_rate_limits": { 00:16:59.723 "rw_ios_per_sec": 0, 00:16:59.723 "rw_mbytes_per_sec": 0, 00:16:59.723 "r_mbytes_per_sec": 0, 00:16:59.723 "w_mbytes_per_sec": 0 00:16:59.723 }, 00:16:59.723 "claimed": false, 00:16:59.723 "zoned": false, 00:16:59.723 "supported_io_types": { 00:16:59.723 "read": true, 00:16:59.723 "write": true, 00:16:59.723 "unmap": true, 00:16:59.723 "flush": false, 00:16:59.723 "reset": true, 00:16:59.723 "nvme_admin": false, 00:16:59.723 "nvme_io": false, 00:16:59.723 "nvme_io_md": false, 00:16:59.723 "write_zeroes": true, 00:16:59.723 "zcopy": false, 00:16:59.723 "get_zone_info": false, 00:16:59.723 "zone_management": false, 00:16:59.723 "zone_append": false, 00:16:59.723 "compare": false, 00:16:59.723 "compare_and_write": false, 00:16:59.723 "abort": false, 00:16:59.723 "seek_hole": true, 00:16:59.723 "seek_data": true, 00:16:59.723 "copy": false, 00:16:59.723 "nvme_iov_md": false 00:16:59.723 }, 00:16:59.723 "driver_specific": { 00:16:59.723 "lvol": { 00:16:59.723 "lvol_store_uuid": "61922f39-1daf-4f94-8eef-8cc5826d8a61", 00:16:59.723 "base_bdev": "nvme0n1", 00:16:59.723 "thin_provision": true, 00:16:59.723 "num_allocated_clusters": 0, 00:16:59.723 "snapshot": false, 00:16:59.723 "clone": false, 00:16:59.723 "esnap_clone": false 00:16:59.723 } 00:16:59.723 } 00:16:59.723 } 00:16:59.723 ]' 00:16:59.723 06:09:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:59.723 06:09:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:59.723 06:09:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:59.723 06:09:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:59.723 06:09:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:59.723 06:09:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:59.723 06:09:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:16:59.723 06:09:25 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d dced692d-f253-4f49-92c2-d6965d2628d8 --l2p_dram_limit 10' 00:16:59.723 06:09:25 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:16:59.723 06:09:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:59.723 06:09:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:16:59.723 06:09:25 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:16:59.723 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:16:59.723 06:09:25 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dced692d-f253-4f49-92c2-d6965d2628d8 --l2p_dram_limit 10 -c nvc0n1p0 00:16:59.981 
[2024-10-01 06:09:25.462410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.462458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:59.981 [2024-10-01 06:09:25.462471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:59.981 [2024-10-01 06:09:25.462479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.462533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.462543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.981 [2024-10-01 06:09:25.462550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:59.981 [2024-10-01 06:09:25.462559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.462577] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:59.981 [2024-10-01 06:09:25.462811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:59.981 [2024-10-01 06:09:25.462823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.462831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.981 [2024-10-01 06:09:25.462839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:16:59.981 [2024-10-01 06:09:25.462859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.462914] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 88a7bfd7-7279-4462-a23e-993b19e95361 00:16:59.981 [2024-10-01 06:09:25.464202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.464309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:59.981 [2024-10-01 06:09:25.464326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:59.981 [2024-10-01 06:09:25.464334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.471186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.471271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.981 [2024-10-01 06:09:25.471287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:16:59.981 [2024-10-01 06:09:25.471299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.471364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.471376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.981 [2024-10-01 06:09:25.471386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:59.981 [2024-10-01 06:09:25.471392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.471439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.471446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:59.981 [2024-10-01 06:09:25.471455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:59.981 [2024-10-01 06:09:25.471461] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.471482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.981 [2024-10-01 06:09:25.473117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.473145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.981 [2024-10-01 06:09:25.473152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:16:59.981 [2024-10-01 06:09:25.473160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.473192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.473200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:59.981 [2024-10-01 06:09:25.473216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:59.981 [2024-10-01 06:09:25.473229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.473243] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:59.981 [2024-10-01 06:09:25.473360] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:59.981 [2024-10-01 06:09:25.473369] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:59.981 [2024-10-01 06:09:25.473380] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:59.981 [2024-10-01 06:09:25.473388] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473396] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473406] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:59.981 [2024-10-01 06:09:25.473418] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:59.981 [2024-10-01 06:09:25.473427] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:59.981 [2024-10-01 06:09:25.473435] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:59.981 [2024-10-01 06:09:25.473444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.473452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:59.981 [2024-10-01 06:09:25.473459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:16:59.981 [2024-10-01 06:09:25.473466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.473530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.981 [2024-10-01 06:09:25.473540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:59.981 [2024-10-01 06:09:25.473545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:59.981 [2024-10-01 06:09:25.473552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.981 [2024-10-01 06:09:25.473629] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:59.981 [2024-10-01 06:09:25.473639] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:16:59.981 [2024-10-01 06:09:25.473645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:59.981 [2024-10-01 06:09:25.473669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:59.981 [2024-10-01 06:09:25.473686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.981 [2024-10-01 06:09:25.473697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:59.981 [2024-10-01 06:09:25.473705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:59.981 [2024-10-01 06:09:25.473709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.981 [2024-10-01 06:09:25.473717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:59.981 [2024-10-01 06:09:25.473722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:59.981 [2024-10-01 06:09:25.473729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:59.981 [2024-10-01 06:09:25.473740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:59.981 [2024-10-01 06:09:25.473757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:59.981 [2024-10-01 06:09:25.473776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:59.981 [2024-10-01 06:09:25.473795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:59.981 [2024-10-01 06:09:25.473818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.981 [2024-10-01 06:09:25.473831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:59.981 [2024-10-01 06:09:25.473837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:59.981 [2024-10-01 06:09:25.473860] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.981 [2024-10-01 06:09:25.473868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:59.981 [2024-10-01 06:09:25.473876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:59.981 [2024-10-01 06:09:25.473882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.981 [2024-10-01 06:09:25.473889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:59.981 [2024-10-01 06:09:25.473895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:59.982 [2024-10-01 06:09:25.473904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.982 [2024-10-01 06:09:25.473910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:59.982 [2024-10-01 06:09:25.473918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:59.982 [2024-10-01 06:09:25.473924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.982 [2024-10-01 06:09:25.473931] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:59.982 [2024-10-01 06:09:25.473943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:59.982 [2024-10-01 06:09:25.473953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.982 [2024-10-01 06:09:25.473962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.982 [2024-10-01 06:09:25.473971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:59.982 [2024-10-01 06:09:25.473977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:59.982 [2024-10-01 06:09:25.473985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:59.982 [2024-10-01 06:09:25.473990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:59.982 [2024-10-01 06:09:25.473998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:59.982 [2024-10-01 06:09:25.474004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:59.982 [2024-10-01 06:09:25.474015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:59.982 [2024-10-01 06:09:25.474025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.982 [2024-10-01 06:09:25.474036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:59.982 [2024-10-01 06:09:25.474043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:59.982 [2024-10-01 06:09:25.474051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:59.982 [2024-10-01 06:09:25.474057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:59.982 [2024-10-01 06:09:25.474066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:59.982 [2024-10-01 06:09:25.474072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:16:59.982 [2024-10-01 06:09:25.474091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:59.982 [2024-10-01 06:09:25.474098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:59.982 [2024-10-01 06:09:25.474106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:59.982 [2024-10-01 06:09:25.474111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:59.982 [2024-10-01 06:09:25.474120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:59.982 [2024-10-01 06:09:25.474127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:59.982 [2024-10-01 06:09:25.474135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:59.982 [2024-10-01 06:09:25.474142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:59.982 [2024-10-01 06:09:25.474150] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:59.982 [2024-10-01 06:09:25.474157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.982 [2024-10-01 06:09:25.474166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:59.982 [2024-10-01 06:09:25.474172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:59.982 [2024-10-01 06:09:25.474180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:59.982 [2024-10-01 06:09:25.474187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:59.982 [2024-10-01 06:09:25.474196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.982 [2024-10-01 06:09:25.474205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:59.982 [2024-10-01 06:09:25.474215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:16:59.982 [2024-10-01 06:09:25.474224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.982 [2024-10-01 06:09:25.474255] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
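Every management step in this bring-up is book-ended by the same trace_step quartet (Action or Rollback, name, duration, status), which makes slow phases easy to spot: the NV cache scrub announced above dominates the startup that follows at roughly 3.6 seconds. A rough way to tabulate the steps from a captured log, assuming it was saved to a file named build.log; the awk split relies only on the "name:" and "duration:" markers printed by mngt/ftl_mngt.c:

    awk -F 'name:|duration:' '
        /428:trace_step/ { sub(/^[ \t]+/, "", $2); name = $2 }           # remember the step name
        /430:trace_step/ { sub(/^[ \t]+/, "", $2); print $2 "\t" name }  # pair it with its duration
    ' build.log

Sorting the first column numerically surfaces the scrub (3637.266 ms) and the L2P clear (94.454 ms) that appear in the remainder of this trace.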
00:16:59.982 [2024-10-01 06:09:25.474263] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:04.177 [2024-10-01 06:09:29.111549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.111671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:04.178 [2024-10-01 06:09:29.111700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3637.266 ms 00:17:04.178 [2024-10-01 06:09:29.111711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.131453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.131744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.178 [2024-10-01 06:09:29.131776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.568 ms 00:17:04.178 [2024-10-01 06:09:29.131786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.131978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.131991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:04.178 [2024-10-01 06:09:29.132010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:04.178 [2024-10-01 06:09:29.132019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.148085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.148144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.178 [2024-10-01 06:09:29.148162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.009 ms 00:17:04.178 [2024-10-01 06:09:29.148171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.148213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.148227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.178 [2024-10-01 06:09:29.148239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:04.178 [2024-10-01 06:09:29.148247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.149029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.149061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.178 [2024-10-01 06:09:29.149082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:17:04.178 [2024-10-01 06:09:29.149091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.149248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.149258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.178 [2024-10-01 06:09:29.149276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:04.178 [2024-10-01 06:09:29.149284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.171392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.171625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.178 [2024-10-01 
06:09:29.171659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.073 ms 00:17:04.178 [2024-10-01 06:09:29.171671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.183830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:04.178 [2024-10-01 06:09:29.188882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.188933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:04.178 [2024-10-01 06:09:29.188946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.063 ms 00:17:04.178 [2024-10-01 06:09:29.188960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.283453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.283543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:04.178 [2024-10-01 06:09:29.283559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.454 ms 00:17:04.178 [2024-10-01 06:09:29.283580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.283829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.283868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:04.178 [2024-10-01 06:09:29.283881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:17:04.178 [2024-10-01 06:09:29.283892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.290117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.290315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:04.178 [2024-10-01 06:09:29.290336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.184 ms 00:17:04.178 [2024-10-01 06:09:29.290349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.295473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.295528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:04.178 [2024-10-01 06:09:29.295540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.073 ms 00:17:04.178 [2024-10-01 06:09:29.295551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.295977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.296000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:04.178 [2024-10-01 06:09:29.296012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:17:04.178 [2024-10-01 06:09:29.296027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.345948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.346008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:04.178 [2024-10-01 06:09:29.346023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.898 ms 00:17:04.178 [2024-10-01 06:09:29.346035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.354043] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.354100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:04.178 [2024-10-01 06:09:29.354112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.939 ms 00:17:04.178 [2024-10-01 06:09:29.354124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.360069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.360125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:04.178 [2024-10-01 06:09:29.360135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.896 ms 00:17:04.178 [2024-10-01 06:09:29.360146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.366509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.366568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:04.178 [2024-10-01 06:09:29.366579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.315 ms 00:17:04.178 [2024-10-01 06:09:29.366593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.366649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.366662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:04.178 [2024-10-01 06:09:29.366672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:04.178 [2024-10-01 06:09:29.366683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.366786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.178 [2024-10-01 06:09:29.366799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:04.178 [2024-10-01 06:09:29.366807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:04.178 [2024-10-01 06:09:29.366818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.178 [2024-10-01 06:09:29.368226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3905.186 ms, result 0 00:17:04.178 { 00:17:04.178 "name": "ftl0", 00:17:04.178 "uuid": "88a7bfd7-7279-4462-a23e-993b19e95361" 00:17:04.178 } 00:17:04.178 06:09:29 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:04.178 06:09:29 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:04.178 06:09:29 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:04.178 06:09:29 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:04.454 [2024-10-01 06:09:29.811429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.454 [2024-10-01 06:09:29.811503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:04.454 [2024-10-01 06:09:29.811524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:04.454 [2024-10-01 06:09:29.811534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.454 [2024-10-01 06:09:29.811570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:04.454 
[2024-10-01 06:09:29.812590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.454 [2024-10-01 06:09:29.812649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:04.454 [2024-10-01 06:09:29.812663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:17:04.454 [2024-10-01 06:09:29.812675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.454 [2024-10-01 06:09:29.812986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.454 [2024-10-01 06:09:29.813005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:04.454 [2024-10-01 06:09:29.813014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:04.454 [2024-10-01 06:09:29.813026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.454 [2024-10-01 06:09:29.816316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.454 [2024-10-01 06:09:29.816345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:04.454 [2024-10-01 06:09:29.816362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:17:04.454 [2024-10-01 06:09:29.816374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.454 [2024-10-01 06:09:29.822965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.454 [2024-10-01 06:09:29.823135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:04.454 [2024-10-01 06:09:29.823552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.570 ms 00:17:04.454 [2024-10-01 06:09:29.823589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.454 [2024-10-01 06:09:29.827142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.455 [2024-10-01 06:09:29.827210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:04.455 [2024-10-01 06:09:29.827223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.409 ms 00:17:04.455 [2024-10-01 06:09:29.827233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.455 [2024-10-01 06:09:29.834388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.455 [2024-10-01 06:09:29.834448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:04.455 [2024-10-01 06:09:29.834460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.104 ms 00:17:04.455 [2024-10-01 06:09:29.834482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.455 [2024-10-01 06:09:29.834631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.455 [2024-10-01 06:09:29.834646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:04.455 [2024-10-01 06:09:29.834656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:04.455 [2024-10-01 06:09:29.834667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.455 [2024-10-01 06:09:29.838253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.455 [2024-10-01 06:09:29.838310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:04.455 [2024-10-01 06:09:29.838320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:17:04.455 [2024-10-01 06:09:29.838332] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.455 [2024-10-01 06:09:29.841182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.455 [2024-10-01 06:09:29.841356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:04.455 [2024-10-01 06:09:29.841427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:17:04.455 [2024-10-01 06:09:29.841455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.455 [2024-10-01 06:09:29.843696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.455 [2024-10-01 06:09:29.843880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:04.455 [2024-10-01 06:09:29.844057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.186 ms 00:17:04.455 [2024-10-01 06:09:29.844105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.455 [2024-10-01 06:09:29.846389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.455 [2024-10-01 06:09:29.846547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:04.455 [2024-10-01 06:09:29.846616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.172 ms 00:17:04.455 [2024-10-01 06:09:29.846642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.455 [2024-10-01 06:09:29.846691] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:04.455 [2024-10-01 06:09:29.846729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.846762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.846794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.846824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.846932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.846963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.846995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847280] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.847981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 
[2024-10-01 06:09:29.848187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.848984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.849016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:17:04.455 [2024-10-01 06:09:29.849048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.849077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.849109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:04.455 [2024-10-01 06:09:29.849139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:04.456 [2024-10-01 06:09:29.849909] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:04.456 [2024-10-01 06:09:29.849919] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88a7bfd7-7279-4462-a23e-993b19e95361 00:17:04.456 [2024-10-01 06:09:29.849931] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:04.456 [2024-10-01 06:09:29.849939] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:04.456 [2024-10-01 06:09:29.849950] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:04.456 [2024-10-01 06:09:29.849959] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:04.456 [2024-10-01 06:09:29.849972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:04.456 [2024-10-01 06:09:29.849980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:04.456 [2024-10-01 06:09:29.849997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:04.456 [2024-10-01 06:09:29.850004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:04.456 [2024-10-01 06:09:29.850013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:04.456 [2024-10-01 06:09:29.850021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.456 [2024-10-01 06:09:29.850036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:04.456 [2024-10-01 06:09:29.850046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.332 ms 00:17:04.456 [2024-10-01 06:09:29.850056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.853419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.456 [2024-10-01 06:09:29.853565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:04.456 
[2024-10-01 06:09:29.853637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.336 ms 00:17:04.456 [2024-10-01 06:09:29.853664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.853928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.456 [2024-10-01 06:09:29.854033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:04.456 [2024-10-01 06:09:29.854103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:17:04.456 [2024-10-01 06:09:29.854131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.865271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.865439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.456 [2024-10-01 06:09:29.865511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.865539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.865627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.865652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.456 [2024-10-01 06:09:29.865673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.865695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.865799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.865833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.456 [2024-10-01 06:09:29.865882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.865984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.866027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.866060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.456 [2024-10-01 06:09:29.866081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.866104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.886002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.886205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.456 [2024-10-01 06:09:29.886266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.886294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.901896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.902084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.456 [2024-10-01 06:09:29.902106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.902119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.902230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.902249] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.456 [2024-10-01 06:09:29.902258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.902269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.902323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.902336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.456 [2024-10-01 06:09:29.902349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.902359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.902460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.902474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.456 [2024-10-01 06:09:29.902483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.902493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.902536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.902549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:04.456 [2024-10-01 06:09:29.902564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.902578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.902632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.902648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.456 [2024-10-01 06:09:29.902658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.456 [2024-10-01 06:09:29.902670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.456 [2024-10-01 06:09:29.902735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:04.456 [2024-10-01 06:09:29.902750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.456 [2024-10-01 06:09:29.902762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:04.457 [2024-10-01 06:09:29.902775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.457 [2024-10-01 06:09:29.902985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 91.503 ms, result 0 00:17:04.457 true 00:17:04.457 06:09:29 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 85574 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 85574 ']' 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 85574 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85574 00:17:04.457 killing process with pid 85574 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.457 
06:09:29 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85574' 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 85574 00:17:04.457 06:09:29 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 85574 00:17:09.751 06:09:34 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:13.948 262144+0 records in 00:17:13.948 262144+0 records out 00:17:13.948 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.45661 s, 241 MB/s 00:17:13.948 06:09:39 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:15.323 06:09:40 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.582 [2024-10-01 06:09:40.977097] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:17:15.582 [2024-10-01 06:09:40.977184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85802 ] 00:17:15.582 [2024-10-01 06:09:41.114792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.582 [2024-10-01 06:09:41.158157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.841 [2024-10-01 06:09:41.260620] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.841 [2024-10-01 06:09:41.260694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.841 [2024-10-01 06:09:41.417944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.841 [2024-10-01 06:09:41.417991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:15.841 [2024-10-01 06:09:41.418009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:15.841 [2024-10-01 06:09:41.418018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.841 [2024-10-01 06:09:41.418066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.841 [2024-10-01 06:09:41.418077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:15.841 [2024-10-01 06:09:41.418086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:15.841 [2024-10-01 06:09:41.418100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.841 [2024-10-01 06:09:41.418121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:15.841 [2024-10-01 06:09:41.418648] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:15.841 [2024-10-01 06:09:41.418685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.841 [2024-10-01 06:09:41.418695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:15.841 [2024-10-01 06:09:41.418710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:17:15.841 [2024-10-01 06:09:41.418718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.841 [2024-10-01 06:09:41.420121] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:15.841 [2024-10-01 06:09:41.422945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.841 [2024-10-01 06:09:41.422976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:15.841 [2024-10-01 06:09:41.422987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:17:15.841 [2024-10-01 06:09:41.422996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.841 [2024-10-01 06:09:41.423052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.841 [2024-10-01 06:09:41.423064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:15.841 [2024-10-01 06:09:41.423073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:15.841 [2024-10-01 06:09:41.423082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.841 [2024-10-01 06:09:41.429557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.841 [2024-10-01 06:09:41.429666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:15.841 [2024-10-01 06:09:41.429724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.431 ms 00:17:15.841 [2024-10-01 06:09:41.429747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.841 [2024-10-01 06:09:41.429930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.841 [2024-10-01 06:09:41.429992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:15.841 [2024-10-01 06:09:41.430095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:15.841 [2024-10-01 06:09:41.430196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.841 [2024-10-01 06:09:41.430290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.841 [2024-10-01 06:09:41.430322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:15.841 [2024-10-01 06:09:41.430388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:15.841 [2024-10-01 06:09:41.430410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.842 [2024-10-01 06:09:41.430456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:15.842 [2024-10-01 06:09:41.432195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.842 [2024-10-01 06:09:41.432290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:15.842 [2024-10-01 06:09:41.432338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:17:15.842 [2024-10-01 06:09:41.432383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.842 [2024-10-01 06:09:41.432420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.842 [2024-10-01 06:09:41.432430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:15.842 [2024-10-01 06:09:41.432439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:15.842 [2024-10-01 06:09:41.432446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.842 [2024-10-01 06:09:41.432479] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:15.842 [2024-10-01 06:09:41.432503] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:15.842 [2024-10-01 06:09:41.432546] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:15.842 [2024-10-01 06:09:41.432562] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:15.842 [2024-10-01 06:09:41.432667] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:15.842 [2024-10-01 06:09:41.432681] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:15.842 [2024-10-01 06:09:41.432692] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:15.842 [2024-10-01 06:09:41.432702] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:15.842 [2024-10-01 06:09:41.432714] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:15.842 [2024-10-01 06:09:41.432726] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:15.842 [2024-10-01 06:09:41.432733] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:15.842 [2024-10-01 06:09:41.432740] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:15.842 [2024-10-01 06:09:41.432748] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:15.842 [2024-10-01 06:09:41.432756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.842 [2024-10-01 06:09:41.432764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:15.842 [2024-10-01 06:09:41.432771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:17:15.842 [2024-10-01 06:09:41.432778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.842 [2024-10-01 06:09:41.432876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.842 [2024-10-01 06:09:41.432886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:15.842 [2024-10-01 06:09:41.432899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:15.842 [2024-10-01 06:09:41.432910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.842 [2024-10-01 06:09:41.433009] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:15.842 [2024-10-01 06:09:41.433024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:15.842 [2024-10-01 06:09:41.433034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:15.842 [2024-10-01 06:09:41.433067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:15.842 [2024-10-01 06:09:41.433095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:15.842 [2024-10-01 
06:09:41.433104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.842 [2024-10-01 06:09:41.433116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:15.842 [2024-10-01 06:09:41.433124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:15.842 [2024-10-01 06:09:41.433138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.842 [2024-10-01 06:09:41.433146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:15.842 [2024-10-01 06:09:41.433158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:15.842 [2024-10-01 06:09:41.433166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:15.842 [2024-10-01 06:09:41.433186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:15.842 [2024-10-01 06:09:41.433231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:15.842 [2024-10-01 06:09:41.433255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:15.842 [2024-10-01 06:09:41.433283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:15.842 [2024-10-01 06:09:41.433317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:15.842 [2024-10-01 06:09:41.433340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.842 [2024-10-01 06:09:41.433355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:15.842 [2024-10-01 06:09:41.433363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:15.842 [2024-10-01 06:09:41.433371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.842 [2024-10-01 06:09:41.433379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:15.842 [2024-10-01 06:09:41.433387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:15.842 [2024-10-01 06:09:41.433395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:17:15.842 [2024-10-01 06:09:41.433412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:15.842 [2024-10-01 06:09:41.433419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433427] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:15.842 [2024-10-01 06:09:41.433437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:15.842 [2024-10-01 06:09:41.433445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.842 [2024-10-01 06:09:41.433463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:15.842 [2024-10-01 06:09:41.433470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:15.842 [2024-10-01 06:09:41.433477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:15.842 [2024-10-01 06:09:41.433484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:15.842 [2024-10-01 06:09:41.433491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:15.842 [2024-10-01 06:09:41.433498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:15.842 [2024-10-01 06:09:41.433506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:15.842 [2024-10-01 06:09:41.433516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.842 [2024-10-01 06:09:41.433525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:15.842 [2024-10-01 06:09:41.433532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:15.842 [2024-10-01 06:09:41.433539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:15.842 [2024-10-01 06:09:41.433546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:15.842 [2024-10-01 06:09:41.433553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:15.842 [2024-10-01 06:09:41.433562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:15.842 [2024-10-01 06:09:41.433569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:15.842 [2024-10-01 06:09:41.433576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:15.842 [2024-10-01 06:09:41.433584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:15.842 [2024-10-01 06:09:41.433597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:15.842 [2024-10-01 06:09:41.433604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:15.842 [2024-10-01 06:09:41.433611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:15.842 [2024-10-01 06:09:41.433618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:15.842 [2024-10-01 06:09:41.433626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:15.843 [2024-10-01 06:09:41.433633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:15.843 [2024-10-01 06:09:41.433641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.843 [2024-10-01 06:09:41.433649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:15.843 [2024-10-01 06:09:41.433656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:15.843 [2024-10-01 06:09:41.433666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:15.843 [2024-10-01 06:09:41.433674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:15.843 [2024-10-01 06:09:41.433682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.843 [2024-10-01 06:09:41.433695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:15.843 [2024-10-01 06:09:41.433702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:17:15.843 [2024-10-01 06:09:41.433709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.843 [2024-10-01 06:09:41.455764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.843 [2024-10-01 06:09:41.455916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.843 [2024-10-01 06:09:41.455938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.003 ms 00:17:15.843 [2024-10-01 06:09:41.455947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.843 [2024-10-01 06:09:41.456036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.843 [2024-10-01 06:09:41.456046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:15.843 [2024-10-01 06:09:41.456055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:15.843 [2024-10-01 06:09:41.456063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.466905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.467038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.102 [2024-10-01 06:09:41.467056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.780 ms 00:17:16.102 [2024-10-01 06:09:41.467073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.467111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 
06:09:41.467123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.102 [2024-10-01 06:09:41.467133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:16.102 [2024-10-01 06:09:41.467142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.467597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.467624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.102 [2024-10-01 06:09:41.467637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:17:16.102 [2024-10-01 06:09:41.467648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.467812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.467824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.102 [2024-10-01 06:09:41.467836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:16.102 [2024-10-01 06:09:41.467868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.473763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.473795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:16.102 [2024-10-01 06:09:41.473809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.868 ms 00:17:16.102 [2024-10-01 06:09:41.473817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.476597] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:16.102 [2024-10-01 06:09:41.476633] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:16.102 [2024-10-01 06:09:41.476647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.476656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:16.102 [2024-10-01 06:09:41.476665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:17:16.102 [2024-10-01 06:09:41.476672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.491672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.491724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:16.102 [2024-10-01 06:09:41.491741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.962 ms 00:17:16.102 [2024-10-01 06:09:41.491753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.493748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.493885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:16.102 [2024-10-01 06:09:41.493901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms 00:17:16.102 [2024-10-01 06:09:41.493909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.495389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.495421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:17:16.102 [2024-10-01 06:09:41.495430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:17:16.102 [2024-10-01 06:09:41.495438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.495768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.495780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:16.102 [2024-10-01 06:09:41.495789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:16.102 [2024-10-01 06:09:41.495796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.513522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.513676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:16.102 [2024-10-01 06:09:41.513700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.709 ms 00:17:16.102 [2024-10-01 06:09:41.513711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.521380] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:16.102 [2024-10-01 06:09:41.523896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.524001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:16.102 [2024-10-01 06:09:41.524018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.151 ms 00:17:16.102 [2024-10-01 06:09:41.524026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.524116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.524127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:16.102 [2024-10-01 06:09:41.524136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:16.102 [2024-10-01 06:09:41.524144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.524218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.524228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:16.102 [2024-10-01 06:09:41.524236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:16.102 [2024-10-01 06:09:41.524249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.524276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.524290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:16.102 [2024-10-01 06:09:41.524298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:16.102 [2024-10-01 06:09:41.524306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.524340] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:16.102 [2024-10-01 06:09:41.524351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.524364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:16.102 [2024-10-01 06:09:41.524372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.011 ms 00:17:16.102 [2024-10-01 06:09:41.524380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.528272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.528311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:16.102 [2024-10-01 06:09:41.528321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:17:16.102 [2024-10-01 06:09:41.528329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.528399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.102 [2024-10-01 06:09:41.528410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:16.102 [2024-10-01 06:09:41.528418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:16.102 [2024-10-01 06:09:41.528426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.102 [2024-10-01 06:09:41.529531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.143 ms, result 0 00:18:15.788  Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-01 06:10:41.168437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.168471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:15.788 [2024-10-01 06:10:41.168482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.002 ms 00:18:15.788 [2024-10-01 06:10:41.168491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.168506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:15.788 [2024-10-01 06:10:41.168909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.168926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:15.788 [2024-10-01 06:10:41.168933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:18:15.788 [2024-10-01 06:10:41.168940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.171350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.171456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:15.788 [2024-10-01 06:10:41.171468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.395 ms 00:18:15.788 [2024-10-01 06:10:41.171479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.185831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.185870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:15.788 [2024-10-01 06:10:41.185895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.336 ms 00:18:15.788 [2024-10-01 06:10:41.185902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.190519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.190541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:15.788 [2024-10-01 06:10:41.190553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.594 ms 00:18:15.788 [2024-10-01 06:10:41.190559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.192727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.192754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:15.788 [2024-10-01 06:10:41.192762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:18:15.788 [2024-10-01 06:10:41.192768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.196495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.196599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:15.788 [2024-10-01 06:10:41.196610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.702 ms 00:18:15.788 [2024-10-01 06:10:41.196616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.196697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.196704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:15.788 [2024-10-01 06:10:41.196711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:15.788 [2024-10-01 06:10:41.196722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.199249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 
06:10:41.199274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:15.788 [2024-10-01 06:10:41.199281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.516 ms 00:18:15.788 [2024-10-01 06:10:41.199286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.201487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.201512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:15.788 [2024-10-01 06:10:41.201518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.177 ms 00:18:15.788 [2024-10-01 06:10:41.201524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.203077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.203102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:15.788 [2024-10-01 06:10:41.203109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.530 ms 00:18:15.788 [2024-10-01 06:10:41.203114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.204901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.788 [2024-10-01 06:10:41.204925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:15.788 [2024-10-01 06:10:41.204932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:18:15.788 [2024-10-01 06:10:41.204937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.788 [2024-10-01 06:10:41.204960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:15.788 [2024-10-01 06:10:41.204976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:15.788 [2024-10-01 06:10:41.204990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:15.788 [2024-10-01 06:10:41.204996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:15.788 [2024-10-01 06:10:41.205002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:15.788 [2024-10-01 06:10:41.205007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:15.788 [2024-10-01 06:10:41.205013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:15.788 [2024-10-01 06:10:41.205019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:15.788 [2024-10-01 06:10:41.205025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:15.788 [2024-10-01 06:10:41.205031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205054] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 
[2024-10-01 06:10:41.205199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:18:15.789 [2024-10-01 06:10:41.205356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:15.789 [2024-10-01 06:10:41.205431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:15.790 [2024-10-01 06:10:41.205580] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:15.790 [2024-10-01 06:10:41.205586] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88a7bfd7-7279-4462-a23e-993b19e95361 00:18:15.790 [2024-10-01 06:10:41.205593] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:15.790 [2024-10-01 06:10:41.205598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:15.790 [2024-10-01 06:10:41.205603] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:15.790 [2024-10-01 06:10:41.205609] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:15.790 [2024-10-01 06:10:41.205615] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:15.790 [2024-10-01 06:10:41.205620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:15.790 [2024-10-01 06:10:41.205626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:15.790 [2024-10-01 06:10:41.205631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:15.790 [2024-10-01 06:10:41.205636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:15.790 [2024-10-01 06:10:41.205641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.790 [2024-10-01 06:10:41.205648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:15.790 [2024-10-01 06:10:41.205654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:18:15.790 [2024-10-01 06:10:41.205663] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.206919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.790 [2024-10-01 06:10:41.206937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:15.790 [2024-10-01 06:10:41.206944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:18:15.790 [2024-10-01 06:10:41.206951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.207021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.790 [2024-10-01 06:10:41.207028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:15.790 [2024-10-01 06:10:41.207037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:15.790 [2024-10-01 06:10:41.207042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.210781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.790 [2024-10-01 06:10:41.210911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:15.790 [2024-10-01 06:10:41.210924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.790 [2024-10-01 06:10:41.210929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.210971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.790 [2024-10-01 06:10:41.210978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:15.790 [2024-10-01 06:10:41.210989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.790 [2024-10-01 06:10:41.210996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.211038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.790 [2024-10-01 06:10:41.211046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:15.790 [2024-10-01 06:10:41.211052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.790 [2024-10-01 06:10:41.211058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.211069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.790 [2024-10-01 06:10:41.211075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:15.790 [2024-10-01 06:10:41.211081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.790 [2024-10-01 06:10:41.211088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.218462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.790 [2024-10-01 06:10:41.218579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:15.790 [2024-10-01 06:10:41.218591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.790 [2024-10-01 06:10:41.218597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.224791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.790 [2024-10-01 06:10:41.224822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:15.790 [2024-10-01 06:10:41.224838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:18:15.790 [2024-10-01 06:10:41.224859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.790 [2024-10-01 06:10:41.224897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.790 [2024-10-01 06:10:41.224904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.790 [2024-10-01 06:10:41.224911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.791 [2024-10-01 06:10:41.224917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.791 [2024-10-01 06:10:41.224946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.791 [2024-10-01 06:10:41.224953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.791 [2024-10-01 06:10:41.224959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.791 [2024-10-01 06:10:41.224966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.791 [2024-10-01 06:10:41.225018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.791 [2024-10-01 06:10:41.225026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.791 [2024-10-01 06:10:41.225032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.791 [2024-10-01 06:10:41.225038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.791 [2024-10-01 06:10:41.225059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.791 [2024-10-01 06:10:41.225066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:15.791 [2024-10-01 06:10:41.225072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.791 [2024-10-01 06:10:41.225081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.791 [2024-10-01 06:10:41.225111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.791 [2024-10-01 06:10:41.225119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:15.791 [2024-10-01 06:10:41.225128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.791 [2024-10-01 06:10:41.225135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.791 [2024-10-01 06:10:41.225166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.791 [2024-10-01 06:10:41.225175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:15.791 [2024-10-01 06:10:41.225181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.791 [2024-10-01 06:10:41.225187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.791 [2024-10-01 06:10:41.225286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.826 ms, result 0 00:18:16.051 00:18:16.051 00:18:16.051 06:10:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:16.051 [2024-10-01 06:10:41.555902] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:18:16.051 [2024-10-01 06:10:41.556166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86430 ] 00:18:16.311 [2024-10-01 06:10:41.691151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.312 [2024-10-01 06:10:41.726669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.312 [2024-10-01 06:10:41.819639] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:16.312 [2024-10-01 06:10:41.819712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:16.574 [2024-10-01 06:10:41.978274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.574 [2024-10-01 06:10:41.978330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:16.574 [2024-10-01 06:10:41.978349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:16.574 [2024-10-01 06:10:41.978358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.574 [2024-10-01 06:10:41.978413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.574 [2024-10-01 06:10:41.978425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:16.574 [2024-10-01 06:10:41.978434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:16.574 [2024-10-01 06:10:41.978448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.574 [2024-10-01 06:10:41.978470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:16.575 [2024-10-01 06:10:41.978768] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:16.575 [2024-10-01 06:10:41.978785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.978800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:16.575 [2024-10-01 06:10:41.978812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:18:16.575 [2024-10-01 06:10:41.978821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.980552] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:16.575 [2024-10-01 06:10:41.984504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.984708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:16.575 [2024-10-01 06:10:41.984729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.955 ms 00:18:16.575 [2024-10-01 06:10:41.984746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.984923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.984953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:16.575 [2024-10-01 06:10:41.984971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:16.575 [2024-10-01 06:10:41.984983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.992938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:16.575 [2024-10-01 06:10:41.992995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:16.575 [2024-10-01 06:10:41.993006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.907 ms 00:18:16.575 [2024-10-01 06:10:41.993019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.993131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.993142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:16.575 [2024-10-01 06:10:41.993151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:16.575 [2024-10-01 06:10:41.993160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.993226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.993270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:16.575 [2024-10-01 06:10:41.993280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:16.575 [2024-10-01 06:10:41.993287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.993311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:16.575 [2024-10-01 06:10:41.995425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.995588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:16.575 [2024-10-01 06:10:41.995605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:18:16.575 [2024-10-01 06:10:41.995621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.995659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.995668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:16.575 [2024-10-01 06:10:41.995676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:16.575 [2024-10-01 06:10:41.995688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.995711] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:16.575 [2024-10-01 06:10:41.995739] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:16.575 [2024-10-01 06:10:41.995781] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:16.575 [2024-10-01 06:10:41.995798] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:16.575 [2024-10-01 06:10:41.995925] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:16.575 [2024-10-01 06:10:41.995940] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:16.575 [2024-10-01 06:10:41.995954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:16.575 [2024-10-01 06:10:41.995965] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:16.575 [2024-10-01 06:10:41.995977] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:16.575 [2024-10-01 06:10:41.995987] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:16.575 [2024-10-01 06:10:41.995995] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:16.575 [2024-10-01 06:10:41.996005] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:16.575 [2024-10-01 06:10:41.996014] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:16.575 [2024-10-01 06:10:41.996022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.996030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:16.575 [2024-10-01 06:10:41.996038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:16.575 [2024-10-01 06:10:41.996046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.996130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.575 [2024-10-01 06:10:41.996142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:16.575 [2024-10-01 06:10:41.996155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:16.575 [2024-10-01 06:10:41.996165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.575 [2024-10-01 06:10:41.996264] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:16.575 [2024-10-01 06:10:41.996277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:16.575 [2024-10-01 06:10:41.996287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.575 [2024-10-01 06:10:41.996298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:16.575 [2024-10-01 06:10:41.996316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:16.575 [2024-10-01 06:10:41.996335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:16.575 [2024-10-01 06:10:41.996345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.575 [2024-10-01 06:10:41.996362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:16.575 [2024-10-01 06:10:41.996371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:16.575 [2024-10-01 06:10:41.996382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.575 [2024-10-01 06:10:41.996390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:16.575 [2024-10-01 06:10:41.996399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:16.575 [2024-10-01 06:10:41.996408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:16.575 [2024-10-01 06:10:41.996425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:16.575 [2024-10-01 06:10:41.996433] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:16.575 [2024-10-01 06:10:41.996450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.575 [2024-10-01 06:10:41.996466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:16.575 [2024-10-01 06:10:41.996474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.575 [2024-10-01 06:10:41.996490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:16.575 [2024-10-01 06:10:41.996498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.575 [2024-10-01 06:10:41.996517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:16.575 [2024-10-01 06:10:41.996524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.575 [2024-10-01 06:10:41.996539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:16.575 [2024-10-01 06:10:41.996545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.575 [2024-10-01 06:10:41.996559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:16.575 [2024-10-01 06:10:41.996566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:16.575 [2024-10-01 06:10:41.996574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.575 [2024-10-01 06:10:41.996580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:16.575 [2024-10-01 06:10:41.996587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:16.575 [2024-10-01 06:10:41.996593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:16.575 [2024-10-01 06:10:41.996606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:16.575 [2024-10-01 06:10:41.996614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.575 [2024-10-01 06:10:41.996622] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:16.575 [2024-10-01 06:10:41.996632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:16.575 [2024-10-01 06:10:41.996640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.576 [2024-10-01 06:10:41.996652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.576 [2024-10-01 06:10:41.996660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:16.576 [2024-10-01 06:10:41.996668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:16.576 [2024-10-01 06:10:41.996675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:16.576 
[2024-10-01 06:10:41.996682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:16.576 [2024-10-01 06:10:41.996689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:16.576 [2024-10-01 06:10:41.996696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:16.576 [2024-10-01 06:10:41.996705] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:16.576 [2024-10-01 06:10:41.996715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.576 [2024-10-01 06:10:41.996724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:16.576 [2024-10-01 06:10:41.996731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:16.576 [2024-10-01 06:10:41.996738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:16.576 [2024-10-01 06:10:41.996745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:16.576 [2024-10-01 06:10:41.996754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:16.576 [2024-10-01 06:10:41.996765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:16.576 [2024-10-01 06:10:41.996772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:16.576 [2024-10-01 06:10:41.996779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:16.576 [2024-10-01 06:10:41.996787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:16.576 [2024-10-01 06:10:41.996801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:16.576 [2024-10-01 06:10:41.996808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:16.576 [2024-10-01 06:10:41.996815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:16.576 [2024-10-01 06:10:41.996822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:16.576 [2024-10-01 06:10:41.996830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:16.576 [2024-10-01 06:10:41.996838] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:16.576 [2024-10-01 06:10:41.996867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.576 [2024-10-01 06:10:41.996875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:16.576 [2024-10-01 06:10:41.996883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:16.576 [2024-10-01 06:10:41.996891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:16.576 [2024-10-01 06:10:41.996900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:16.576 [2024-10-01 06:10:41.996909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:41.996919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:16.576 [2024-10-01 06:10:41.996930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:18:16.576 [2024-10-01 06:10:41.996938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.018734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.018803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:16.576 [2024-10-01 06:10:42.018826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.746 ms 00:18:16.576 [2024-10-01 06:10:42.018840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.018986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.019002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:16.576 [2024-10-01 06:10:42.019013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:16.576 [2024-10-01 06:10:42.019030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.031453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.031502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.576 [2024-10-01 06:10:42.031514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.345 ms 00:18:16.576 [2024-10-01 06:10:42.031522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.031564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.031577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.576 [2024-10-01 06:10:42.031585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:16.576 [2024-10-01 06:10:42.031594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.032178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.032217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.576 [2024-10-01 06:10:42.032234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:18:16.576 [2024-10-01 06:10:42.032243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.032390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.032400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.576 [2024-10-01 06:10:42.032410] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:18:16.576 [2024-10-01 06:10:42.032420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.039260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.039306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.576 [2024-10-01 06:10:42.039323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.817 ms 00:18:16.576 [2024-10-01 06:10:42.039332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.043203] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:16.576 [2024-10-01 06:10:42.043258] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:16.576 [2024-10-01 06:10:42.043270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.043279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:16.576 [2024-10-01 06:10:42.043288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.837 ms 00:18:16.576 [2024-10-01 06:10:42.043296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.059535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.059590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:16.576 [2024-10-01 06:10:42.059606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.182 ms 00:18:16.576 [2024-10-01 06:10:42.059615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.062906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.062960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:16.576 [2024-10-01 06:10:42.062973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:18:16.576 [2024-10-01 06:10:42.062981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.066691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.066805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:16.576 [2024-10-01 06:10:42.066839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:18:16.576 [2024-10-01 06:10:42.066905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.067929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.067994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:16.576 [2024-10-01 06:10:42.068023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:18:16.576 [2024-10-01 06:10:42.068058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.099594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.099652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:16.576 [2024-10-01 06:10:42.099665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.492 ms 00:18:16.576 [2024-10-01 06:10:42.099674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.108161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:16.576 [2024-10-01 06:10:42.111058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.111099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:16.576 [2024-10-01 06:10:42.111120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.332 ms 00:18:16.576 [2024-10-01 06:10:42.111131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.111207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.111218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:16.576 [2024-10-01 06:10:42.111228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:16.576 [2024-10-01 06:10:42.111236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.576 [2024-10-01 06:10:42.111304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.576 [2024-10-01 06:10:42.111320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:16.576 [2024-10-01 06:10:42.111331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:16.576 [2024-10-01 06:10:42.111345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.577 [2024-10-01 06:10:42.111377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.577 [2024-10-01 06:10:42.111392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:16.577 [2024-10-01 06:10:42.111402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:16.577 [2024-10-01 06:10:42.111410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.577 [2024-10-01 06:10:42.111446] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:16.577 [2024-10-01 06:10:42.111457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.577 [2024-10-01 06:10:42.111465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:16.577 [2024-10-01 06:10:42.111473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:16.577 [2024-10-01 06:10:42.111481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.577 [2024-10-01 06:10:42.117127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.577 [2024-10-01 06:10:42.117314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:16.577 [2024-10-01 06:10:42.117335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.626 ms 00:18:16.577 [2024-10-01 06:10:42.117344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.577 [2024-10-01 06:10:42.117424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.577 [2024-10-01 06:10:42.117435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:16.577 [2024-10-01 06:10:42.117444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:16.577 [2024-10-01 06:10:42.117455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.577 
[2024-10-01 06:10:42.118567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.842 ms, result 0 00:19:23.324  Copying: 13/1024 [MB] (13 MBps) Copying: 28/1024 [MB] (14 MBps) Copying: 41/1024 [MB] (13 MBps) Copying: 55/1024 [MB] (14 MBps) Copying: 69/1024 [MB] (14 MBps) Copying: 81/1024 [MB] (11 MBps) Copying: 91/1024 [MB] (10 MBps) Copying: 103/1024 [MB] (11 MBps) Copying: 115/1024 [MB] (11 MBps) Copying: 126/1024 [MB] (11 MBps) Copying: 137/1024 [MB] (11 MBps) Copying: 153/1024 [MB] (15 MBps) Copying: 165/1024 [MB] (12 MBps) Copying: 176/1024 [MB] (10 MBps) Copying: 187/1024 [MB] (10 MBps) Copying: 202/1024 [MB] (15 MBps) Copying: 222/1024 [MB] (20 MBps) Copying: 238/1024 [MB] (16 MBps) Copying: 250/1024 [MB] (12 MBps) Copying: 271/1024 [MB] (20 MBps) Copying: 291/1024 [MB] (20 MBps) Copying: 308/1024 [MB] (16 MBps) Copying: 328/1024 [MB] (19 MBps) Copying: 341/1024 [MB] (13 MBps) Copying: 360/1024 [MB] (18 MBps) Copying: 372/1024 [MB] (12 MBps) Copying: 389/1024 [MB] (16 MBps) Copying: 407/1024 [MB] (18 MBps) Copying: 420/1024 [MB] (13 MBps) Copying: 440/1024 [MB] (19 MBps) Copying: 460/1024 [MB] (19 MBps) Copying: 482/1024 [MB] (21 MBps) Copying: 493/1024 [MB] (11 MBps) Copying: 509/1024 [MB] (15 MBps) Copying: 519/1024 [MB] (10 MBps) Copying: 537/1024 [MB] (17 MBps) Copying: 553/1024 [MB] (15 MBps) Copying: 566/1024 [MB] (13 MBps) Copying: 576/1024 [MB] (10 MBps) Copying: 587/1024 [MB] (10 MBps) Copying: 598/1024 [MB] (10 MBps) Copying: 613/1024 [MB] (14 MBps) Copying: 623/1024 [MB] (10 MBps) Copying: 636/1024 [MB] (13 MBps) Copying: 666/1024 [MB] (29 MBps) Copying: 679/1024 [MB] (12 MBps) Copying: 702/1024 [MB] (23 MBps) Copying: 722/1024 [MB] (20 MBps) Copying: 744/1024 [MB] (21 MBps) Copying: 761/1024 [MB] (17 MBps) Copying: 783/1024 [MB] (21 MBps) Copying: 794/1024 [MB] (11 MBps) Copying: 812/1024 [MB] (17 MBps) Copying: 825/1024 [MB] (13 MBps) Copying: 841/1024 [MB] (16 MBps) Copying: 851/1024 [MB] (10 MBps) Copying: 862/1024 [MB] (10 MBps) Copying: 872/1024 [MB] (10 MBps) Copying: 884/1024 [MB] (11 MBps) Copying: 904/1024 [MB] (19 MBps) Copying: 914/1024 [MB] (10 MBps) Copying: 938/1024 [MB] (23 MBps) Copying: 960/1024 [MB] (22 MBps) Copying: 982/1024 [MB] (22 MBps) Copying: 1006/1024 [MB] (24 MBps) Copying: 1022/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-01 06:11:48.777756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.778196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:23.324 [2024-10-01 06:11:48.778228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:23.324 [2024-10-01 06:11:48.778239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.778289] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:23.324 [2024-10-01 06:11:48.779297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.779336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:23.324 [2024-10-01 06:11:48.779349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:19:23.324 [2024-10-01 06:11:48.779359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.779620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 
[2024-10-01 06:11:48.779631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:23.324 [2024-10-01 06:11:48.779641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:19:23.324 [2024-10-01 06:11:48.779650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.784076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.784115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:23.324 [2024-10-01 06:11:48.784125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.408 ms 00:19:23.324 [2024-10-01 06:11:48.784134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.790485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.790530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:23.324 [2024-10-01 06:11:48.790542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.329 ms 00:19:23.324 [2024-10-01 06:11:48.790551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.794611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.794667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:23.324 [2024-10-01 06:11:48.794679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.982 ms 00:19:23.324 [2024-10-01 06:11:48.794687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.800516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.800570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:23.324 [2024-10-01 06:11:48.800582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.782 ms 00:19:23.324 [2024-10-01 06:11:48.800591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.800741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.800754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:23.324 [2024-10-01 06:11:48.800766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:23.324 [2024-10-01 06:11:48.800774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.804202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.804392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:23.324 [2024-10-01 06:11:48.804411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.409 ms 00:19:23.324 [2024-10-01 06:11:48.804420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.324 [2024-10-01 06:11:48.807686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.324 [2024-10-01 06:11:48.807867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:23.325 [2024-10-01 06:11:48.807885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:19:23.325 [2024-10-01 06:11:48.807894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.325 [2024-10-01 06:11:48.810201] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.325 [2024-10-01 06:11:48.810249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:23.325 [2024-10-01 06:11:48.810259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.268 ms 00:19:23.325 [2024-10-01 06:11:48.810266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.325 [2024-10-01 06:11:48.812422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.325 [2024-10-01 06:11:48.812469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:23.325 [2024-10-01 06:11:48.812479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.081 ms 00:19:23.325 [2024-10-01 06:11:48.812486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.325 [2024-10-01 06:11:48.812523] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:23.325 [2024-10-01 06:11:48.812549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 
06:11:48.812700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:19:23.325 [2024-10-01 06:11:48.812971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.812996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:23.325 [2024-10-01 06:11:48.813345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:23.326 [2024-10-01 06:11:48.813518] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:23.326 [2024-10-01 06:11:48.813527] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88a7bfd7-7279-4462-a23e-993b19e95361 00:19:23.326 [2024-10-01 06:11:48.813535] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:23.326 [2024-10-01 06:11:48.813543] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:23.326 [2024-10-01 06:11:48.813552] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:23.326 [2024-10-01 06:11:48.813560] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:23.326 [2024-10-01 06:11:48.813568] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:23.326 [2024-10-01 06:11:48.813579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:23.326 [2024-10-01 06:11:48.813588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:23.326 [2024-10-01 06:11:48.813595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:23.326 [2024-10-01 06:11:48.813601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:23.326 [2024-10-01 06:11:48.813609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.326 [2024-10-01 06:11:48.813625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:23.326 [2024-10-01 06:11:48.813645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:19:23.326 [2024-10-01 06:11:48.813653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.816799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.326 [2024-10-01 06:11:48.816833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:23.326 [2024-10-01 06:11:48.816866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:19:23.326 [2024-10-01 06:11:48.816878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.817030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.326 [2024-10-01 06:11:48.817046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:23.326 [2024-10-01 06:11:48.817056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 
00:19:23.326 [2024-10-01 06:11:48.817065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.826198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.826248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:23.326 [2024-10-01 06:11:48.826260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.826271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.826347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.826363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:23.326 [2024-10-01 06:11:48.826372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.826380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.826454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.826466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:23.326 [2024-10-01 06:11:48.826475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.826484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.826503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.826512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:23.326 [2024-10-01 06:11:48.826524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.826535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.845932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.846173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:23.326 [2024-10-01 06:11:48.846195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.846205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.861692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.861905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:23.326 [2024-10-01 06:11:48.861936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.861947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.862011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.862022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.326 [2024-10-01 06:11:48.862031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.862041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.862091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.862101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.326 [2024-10-01 06:11:48.862111] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.862123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.862221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.862232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.326 [2024-10-01 06:11:48.862241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.862251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.862290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.862301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:23.326 [2024-10-01 06:11:48.862315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.862324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.862385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.862397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.326 [2024-10-01 06:11:48.862406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.862416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.862480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.326 [2024-10-01 06:11:48.862494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.326 [2024-10-01 06:11:48.862503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.326 [2024-10-01 06:11:48.862517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.326 [2024-10-01 06:11:48.862688] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.892 ms, result 0 00:19:23.587 00:19:23.587 00:19:23.587 06:11:49 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:26.138 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:26.138 06:11:51 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:26.138 [2024-10-01 06:11:51.291938] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:19:26.138 [2024-10-01 06:11:51.292063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87157 ] 00:19:26.138 [2024-10-01 06:11:51.430302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.138 [2024-10-01 06:11:51.500187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.138 [2024-10-01 06:11:51.647808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:26.138 [2024-10-01 06:11:51.647920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:26.401 [2024-10-01 06:11:51.812025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.812086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:26.401 [2024-10-01 06:11:51.812106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:26.401 [2024-10-01 06:11:51.812114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.812174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.812185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:26.401 [2024-10-01 06:11:51.812195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:26.401 [2024-10-01 06:11:51.812213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.812239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:26.401 [2024-10-01 06:11:51.812524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:26.401 [2024-10-01 06:11:51.812542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.812551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:26.401 [2024-10-01 06:11:51.812564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:19:26.401 [2024-10-01 06:11:51.812573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.814863] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:26.401 [2024-10-01 06:11:51.819403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.819462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:26.401 [2024-10-01 06:11:51.819474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.560 ms 00:19:26.401 [2024-10-01 06:11:51.819483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.819565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.819579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:26.401 [2024-10-01 06:11:51.819591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:26.401 [2024-10-01 06:11:51.819600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.831087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:26.401 [2024-10-01 06:11:51.831275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:26.401 [2024-10-01 06:11:51.831295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.439 ms 00:19:26.401 [2024-10-01 06:11:51.831305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.831427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.831438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:26.401 [2024-10-01 06:11:51.831447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:26.401 [2024-10-01 06:11:51.831459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.831522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.831533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:26.401 [2024-10-01 06:11:51.831543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:26.401 [2024-10-01 06:11:51.831555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.831578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.401 [2024-10-01 06:11:51.834238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.834278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:26.401 [2024-10-01 06:11:51.834290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:19:26.401 [2024-10-01 06:11:51.834298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.834335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.834344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:26.401 [2024-10-01 06:11:51.834354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:26.401 [2024-10-01 06:11:51.834362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.834385] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:26.401 [2024-10-01 06:11:51.834424] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:26.401 [2024-10-01 06:11:51.834471] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:26.401 [2024-10-01 06:11:51.834488] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:26.401 [2024-10-01 06:11:51.834601] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:26.401 [2024-10-01 06:11:51.834613] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:26.401 [2024-10-01 06:11:51.834625] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:26.401 [2024-10-01 06:11:51.834637] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:26.401 [2024-10-01 06:11:51.834650] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:26.401 [2024-10-01 06:11:51.834659] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:26.401 [2024-10-01 06:11:51.834667] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:26.401 [2024-10-01 06:11:51.834675] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:26.401 [2024-10-01 06:11:51.834685] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:26.401 [2024-10-01 06:11:51.834693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.834702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:26.401 [2024-10-01 06:11:51.834710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:26.401 [2024-10-01 06:11:51.834718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.834805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.401 [2024-10-01 06:11:51.834815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:26.401 [2024-10-01 06:11:51.834828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:26.401 [2024-10-01 06:11:51.834837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.401 [2024-10-01 06:11:51.835135] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:26.401 [2024-10-01 06:11:51.835181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:26.402 [2024-10-01 06:11:51.835209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:26.402 [2024-10-01 06:11:51.835233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:26.402 [2024-10-01 06:11:51.835277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:26.402 [2024-10-01 06:11:51.835321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:26.402 [2024-10-01 06:11:51.835342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:26.402 [2024-10-01 06:11:51.835384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:26.402 [2024-10-01 06:11:51.835403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:26.402 [2024-10-01 06:11:51.835495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:26.402 [2024-10-01 06:11:51.835522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:26.402 [2024-10-01 06:11:51.835542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:26.402 [2024-10-01 06:11:51.835561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:26.402 [2024-10-01 06:11:51.835597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:26.402 [2024-10-01 06:11:51.835616] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:26.402 [2024-10-01 06:11:51.835654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.402 [2024-10-01 06:11:51.835690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:26.402 [2024-10-01 06:11:51.835709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.402 [2024-10-01 06:11:51.835795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:26.402 [2024-10-01 06:11:51.835814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.402 [2024-10-01 06:11:51.835881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:26.402 [2024-10-01 06:11:51.835904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:26.402 [2024-10-01 06:11:51.835922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:26.402 [2024-10-01 06:11:51.835941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:26.402 [2024-10-01 06:11:51.836005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:26.402 [2024-10-01 06:11:51.836016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:26.402 [2024-10-01 06:11:51.836024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:26.402 [2024-10-01 06:11:51.836031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:26.402 [2024-10-01 06:11:51.836038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:26.402 [2024-10-01 06:11:51.836046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:26.402 [2024-10-01 06:11:51.836053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:26.402 [2024-10-01 06:11:51.836061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.402 [2024-10-01 06:11:51.836068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:26.402 [2024-10-01 06:11:51.836075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:26.402 [2024-10-01 06:11:51.836082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.402 [2024-10-01 06:11:51.836090] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:26.402 [2024-10-01 06:11:51.836104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:26.402 [2024-10-01 06:11:51.836118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:26.402 [2024-10-01 06:11:51.836129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:26.402 [2024-10-01 06:11:51.836138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:26.402 [2024-10-01 06:11:51.836145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:26.402 [2024-10-01 06:11:51.836152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:26.402 
[2024-10-01 06:11:51.836160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:26.402 [2024-10-01 06:11:51.836167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:26.402 [2024-10-01 06:11:51.836174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:26.402 [2024-10-01 06:11:51.836184] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:26.402 [2024-10-01 06:11:51.836196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:26.402 [2024-10-01 06:11:51.836206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:26.402 [2024-10-01 06:11:51.836214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:26.402 [2024-10-01 06:11:51.836222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:26.402 [2024-10-01 06:11:51.836230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:26.402 [2024-10-01 06:11:51.836238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:26.402 [2024-10-01 06:11:51.836250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:26.402 [2024-10-01 06:11:51.836258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:26.402 [2024-10-01 06:11:51.836265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:26.402 [2024-10-01 06:11:51.836272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:26.402 [2024-10-01 06:11:51.836288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:26.402 [2024-10-01 06:11:51.836296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:26.402 [2024-10-01 06:11:51.836304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:26.402 [2024-10-01 06:11:51.836312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:26.402 [2024-10-01 06:11:51.836319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:26.402 [2024-10-01 06:11:51.836327] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:26.402 [2024-10-01 06:11:51.836336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:26.402 [2024-10-01 06:11:51.836345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:26.402 [2024-10-01 06:11:51.836353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:26.402 [2024-10-01 06:11:51.836361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:26.402 [2024-10-01 06:11:51.836368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:26.402 [2024-10-01 06:11:51.836377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.402 [2024-10-01 06:11:51.836388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:26.402 [2024-10-01 06:11:51.836399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:19:26.402 [2024-10-01 06:11:51.836407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.402 [2024-10-01 06:11:51.865735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.865797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.403 [2024-10-01 06:11:51.865816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.247 ms 00:19:26.403 [2024-10-01 06:11:51.865826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.865964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.865977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.403 [2024-10-01 06:11:51.865987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:26.403 [2024-10-01 06:11:51.865996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.882010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.882057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.403 [2024-10-01 06:11:51.882069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.925 ms 00:19:26.403 [2024-10-01 06:11:51.882078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.882122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.882132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.403 [2024-10-01 06:11:51.882141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:26.403 [2024-10-01 06:11:51.882149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.882879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.882925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.403 [2024-10-01 06:11:51.882938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:19:26.403 [2024-10-01 06:11:51.882948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.883119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.883128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.403 [2024-10-01 06:11:51.883136] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:26.403 [2024-10-01 06:11:51.883145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.892598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.892642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.403 [2024-10-01 06:11:51.892660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.429 ms 00:19:26.403 [2024-10-01 06:11:51.892669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.897462] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:26.403 [2024-10-01 06:11:51.897515] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:26.403 [2024-10-01 06:11:51.897529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.897539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:26.403 [2024-10-01 06:11:51.897548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:19:26.403 [2024-10-01 06:11:51.897555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.913647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.913699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:26.403 [2024-10-01 06:11:51.913715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.041 ms 00:19:26.403 [2024-10-01 06:11:51.913724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.916889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.916931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:26.403 [2024-10-01 06:11:51.916942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:19:26.403 [2024-10-01 06:11:51.916950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.919935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.920117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:26.403 [2024-10-01 06:11:51.920137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:19:26.403 [2024-10-01 06:11:51.920145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.920502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.920517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.403 [2024-10-01 06:11:51.920528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:19:26.403 [2024-10-01 06:11:51.920536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.949283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.949385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:26.403 [2024-10-01 06:11:51.949401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.726 ms 00:19:26.403 [2024-10-01 06:11:51.949411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.958437] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:26.403 [2024-10-01 06:11:51.961964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.962007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.403 [2024-10-01 06:11:51.962027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.493 ms 00:19:26.403 [2024-10-01 06:11:51.962049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.962130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.962142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:26.403 [2024-10-01 06:11:51.962153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:26.403 [2024-10-01 06:11:51.962162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.962251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.962263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.403 [2024-10-01 06:11:51.962272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:26.403 [2024-10-01 06:11:51.962284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.962324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.962336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.403 [2024-10-01 06:11:51.962345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:26.403 [2024-10-01 06:11:51.962354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.962406] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:26.403 [2024-10-01 06:11:51.962422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.962431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:26.403 [2024-10-01 06:11:51.962441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:26.403 [2024-10-01 06:11:51.962450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.968805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.968884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.403 [2024-10-01 06:11:51.968897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.330 ms 00:19:26.403 [2024-10-01 06:11:51.968905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 [2024-10-01 06:11:51.969005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.403 [2024-10-01 06:11:51.969016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.403 [2024-10-01 06:11:51.969026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:26.403 [2024-10-01 06:11:51.969039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.403 
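Each FTL management step above appears as a fixed group of trace_step records: a 427 line marking an Action (or Rollback), then 428 (name), 430 (duration), and 431 (status). One way to cross-check the 'FTL startup' total that finish_msg reports below is to sum the per-step durations. The following minimal Python sketch assumes the console output has been saved one record per line to a file; the file name is illustrative, not part of this run:

    import re
    import sys

    # Pair each step name with the duration record that follows it, then sum.
    # Assumes one log record per line; "ftl_restore.log" is a hypothetical capture.
    NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+)")
    DUR_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

    names, durations = [], []
    with open(sys.argv[1] if len(sys.argv) > 1 else "ftl_restore.log") as log:
        for line in log:
            if (m := NAME_RE.search(line)):
                names.append(m.group(1).strip())
            elif (m := DUR_RE.search(line)):
                durations.append(float(m.group(1)))

    for step, ms in zip(names, durations):
        print(f"{ms:9.3f} ms  {step}")
    print(f"{sum(durations):9.3f} ms  summed (compare with the finish_msg total)")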
[2024-10-01 06:11:51.970451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.845 ms, result 0 00:20:23.401  Copying: 10/1024 [MB] (10 MBps) [intermediate copy progress updates elided] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-01 06:12:48.986262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.401 [2024-10-01 06:12:48.986345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:23.401 [2024-10-01 06:12:48.986364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:23.401 [2024-10-01 06:12:48.986375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.401 [2024-10-01 06:12:48.988791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:23.401 [2024-10-01 06:12:48.994248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.401 [2024-10-01 06:12:48.994300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:23.401 [2024-10-01 06:12:48.994313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.408 ms 00:20:23.401 [2024-10-01 06:12:48.994324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.401 [2024-10-01 06:12:49.005951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.401 [2024-10-01 06:12:49.006131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:23.401 [2024-10-01 06:12:49.006194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.233 ms 00:20:23.401 [2024-10-01 06:12:49.006218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:20:23.662 [2024-10-01 06:12:49.028994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.662 [2024-10-01 06:12:49.029181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:23.663 [2024-10-01 06:12:49.029256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.742 ms 00:20:23.663 [2024-10-01 06:12:49.029282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.663 [2024-10-01 06:12:49.035486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.663 [2024-10-01 06:12:49.035659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:23.663 [2024-10-01 06:12:49.035679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.135 ms 00:20:23.663 [2024-10-01 06:12:49.035699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.663 [2024-10-01 06:12:49.038804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.663 [2024-10-01 06:12:49.038873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:23.663 [2024-10-01 06:12:49.038885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.036 ms 00:20:23.663 [2024-10-01 06:12:49.038894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.663 [2024-10-01 06:12:49.044584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.663 [2024-10-01 06:12:49.044640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:23.663 [2024-10-01 06:12:49.044664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.642 ms 00:20:23.663 [2024-10-01 06:12:49.044673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.924 [2024-10-01 06:12:49.289980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.925 [2024-10-01 06:12:49.290199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:23.925 [2024-10-01 06:12:49.290233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 245.274 ms 00:20:23.925 [2024-10-01 06:12:49.290244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.925 [2024-10-01 06:12:49.294198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.925 [2024-10-01 06:12:49.294252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:23.925 [2024-10-01 06:12:49.294263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.930 ms 00:20:23.925 [2024-10-01 06:12:49.294273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.925 [2024-10-01 06:12:49.297182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.925 [2024-10-01 06:12:49.297396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:23.925 [2024-10-01 06:12:49.297416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:20:23.925 [2024-10-01 06:12:49.297425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.925 [2024-10-01 06:12:49.299714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.925 [2024-10-01 06:12:49.299766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:23.925 [2024-10-01 06:12:49.299778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:20:23.925 [2024-10-01 
06:12:49.299786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.925 [2024-10-01 06:12:49.302139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.925 [2024-10-01 06:12:49.302191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:23.925 [2024-10-01 06:12:49.302203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:20:23.925 [2024-10-01 06:12:49.302211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.925 [2024-10-01 06:12:49.302254] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:23.925 [2024-10-01 06:12:49.302272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101632 / 261120 wr_cnt: 1 state: open 00:20:23.925 [Bands 2-100: 99 identical ftl_dev_dump_bands records elided, each reading "Band N: 0 / 261120 wr_cnt: 0 state: free"] 00:20:23.926 [2024-10-01 06:12:49.303202] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:23.926 [2024-10-01 06:12:49.303214] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88a7bfd7-7279-4462-a23e-993b19e95361 00:20:23.926 [2024-10-01 06:12:49.303224] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101632 00:20:23.926 [2024-10-01 06:12:49.303234] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102592 00:20:23.926 [2024-10-01 06:12:49.303242] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101632 00:20:23.926 [2024-10-01 06:12:49.303265] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:20:23.926 [2024-10-01 06:12:49.303273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:23.926 [2024-10-01 06:12:49.303282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:23.926 [2024-10-01 06:12:49.303293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:23.926 [2024-10-01 06:12:49.303300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:23.926 [2024-10-01 06:12:49.303308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:23.926 [2024-10-01 06:12:49.303317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.926 [2024-10-01 06:12:49.303326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:23.926 [2024-10-01 06:12:49.303335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:20:23.926 [2024-10-01 06:12:49.303343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.306556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.926 [2024-10-01 06:12:49.306592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:23.926 [2024-10-01 06:12:49.306604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:20:23.926 [2024-10-01 06:12:49.306613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.306781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.926 [2024-10-01 06:12:49.306797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:23.926 [2024-10-01 06:12:49.306809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:20:23.926 [2024-10-01 06:12:49.306817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.316111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.316287] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.926 [2024-10-01 06:12:49.316306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.316315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.316384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.316394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.926 [2024-10-01 06:12:49.316402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.316411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.316480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.316497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.926 [2024-10-01 06:12:49.316506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.316519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.316537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.316545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.926 [2024-10-01 06:12:49.316554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.316562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.334488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.334538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.926 [2024-10-01 06:12:49.334549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.334558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.347876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.347925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.926 [2024-10-01 06:12:49.347937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.347945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.348032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.348042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.926 [2024-10-01 06:12:49.348054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.348062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.348099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.348109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.926 [2024-10-01 06:12:49.348117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.348124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.348194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.348203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.926 [2024-10-01 06:12:49.348210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.348221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.348247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.348256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:23.926 [2024-10-01 06:12:49.348263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.348271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.348316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.348325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.926 [2024-10-01 06:12:49.348333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.926 [2024-10-01 06:12:49.348343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.926 [2024-10-01 06:12:49.348394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.926 [2024-10-01 06:12:49.348403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.927 [2024-10-01 06:12:49.348413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.927 [2024-10-01 06:12:49.348420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.927 [2024-10-01 06:12:49.348564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.660 ms, result 0 00:20:24.495 00:20:24.495 00:20:24.495 06:12:50 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:24.495 [2024-10-01 06:12:50.073570] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
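The ftl.ftl_restore step above invokes spdk_dd with dd-style --skip and --count values given in input-device blocks. The block size of the ftl0 bdev is not printed here; assuming 4 KiB blocks (an inference consistent with the 1024 MB copy totals reported further down, not something this log states directly), --skip=131072 --count=262144 reads 1024 MiB starting 512 MiB into the device. A minimal sketch of the arithmetic, under that assumption:

    # Map dd-style --skip/--count block values to a byte (offset, length) range.
    # The 4 KiB block size is an assumption inferred from the 1024 MB copy total.
    BLOCK_SIZE = 4096  # bytes per ftl0 block (assumed)

    def byte_range(skip_blocks: int, count_blocks: int) -> tuple[int, int]:
        """Return (starting offset, length) in bytes for the read-back request."""
        return skip_blocks * BLOCK_SIZE, count_blocks * BLOCK_SIZE

    offset, length = byte_range(131072, 262144)
    print(f"offset = {offset // 2**20} MiB, length = {length // 2**20} MiB")
    # offset = 512 MiB, length = 1024 MiB -- matching the "Copying: .../1024 [MB]"
    # progress totals reported below.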
00:20:24.495 [2024-10-01 06:12:50.073689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87762 ] 00:20:24.755 [2024-10-01 06:12:50.211596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.755 [2024-10-01 06:12:50.256602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.755 [2024-10-01 06:12:50.356167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.755 [2024-10-01 06:12:50.356228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:25.014 [2024-10-01 06:12:50.503369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.014 [2024-10-01 06:12:50.503407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:25.014 [2024-10-01 06:12:50.503420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:25.014 [2024-10-01 06:12:50.503426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.014 [2024-10-01 06:12:50.503462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.014 [2024-10-01 06:12:50.503473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.014 [2024-10-01 06:12:50.503479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:25.014 [2024-10-01 06:12:50.503491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.014 [2024-10-01 06:12:50.503507] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:25.014 [2024-10-01 06:12:50.503692] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:25.014 [2024-10-01 06:12:50.503704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.014 [2024-10-01 06:12:50.503713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.014 [2024-10-01 06:12:50.503720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:20:25.014 [2024-10-01 06:12:50.503726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.014 [2024-10-01 06:12:50.505099] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:25.014 [2024-10-01 06:12:50.508024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.014 [2024-10-01 06:12:50.508053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:25.014 [2024-10-01 06:12:50.508062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:20:25.014 [2024-10-01 06:12:50.508072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.014 [2024-10-01 06:12:50.508119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.014 [2024-10-01 06:12:50.508128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:25.014 [2024-10-01 06:12:50.508138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:25.014 [2024-10-01 06:12:50.508143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.014 [2024-10-01 06:12:50.514351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:25.014 [2024-10-01 06:12:50.514497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.014 [2024-10-01 06:12:50.514511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.167 ms 00:20:25.014 [2024-10-01 06:12:50.514517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.014 [2024-10-01 06:12:50.514590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.014 [2024-10-01 06:12:50.514599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.014 [2024-10-01 06:12:50.514606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:25.014 [2024-10-01 06:12:50.514612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.014 [2024-10-01 06:12:50.514646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.014 [2024-10-01 06:12:50.514653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:25.014 [2024-10-01 06:12:50.514660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:25.014 [2024-10-01 06:12:50.514669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.014 [2024-10-01 06:12:50.514690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:25.014 [2024-10-01 06:12:50.516308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.014 [2024-10-01 06:12:50.516332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.014 [2024-10-01 06:12:50.516339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:20:25.015 [2024-10-01 06:12:50.516345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.015 [2024-10-01 06:12:50.516369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.015 [2024-10-01 06:12:50.516376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:25.015 [2024-10-01 06:12:50.516385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:25.015 [2024-10-01 06:12:50.516391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.015 [2024-10-01 06:12:50.516406] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:25.015 [2024-10-01 06:12:50.516431] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:25.015 [2024-10-01 06:12:50.516467] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:25.015 [2024-10-01 06:12:50.516482] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:25.015 [2024-10-01 06:12:50.516563] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:25.015 [2024-10-01 06:12:50.516575] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:25.015 [2024-10-01 06:12:50.516584] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:25.015 [2024-10-01 06:12:50.516592] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:25.015 [2024-10-01 06:12:50.516605] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:25.015 [2024-10-01 06:12:50.516611] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:25.015 [2024-10-01 06:12:50.516617] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:25.015 [2024-10-01 06:12:50.516622] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:25.015 [2024-10-01 06:12:50.516629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:25.015 [2024-10-01 06:12:50.516635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.015 [2024-10-01 06:12:50.516643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:25.015 [2024-10-01 06:12:50.516649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:20:25.015 [2024-10-01 06:12:50.516655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.015 [2024-10-01 06:12:50.516718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.015 [2024-10-01 06:12:50.516725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:25.015 [2024-10-01 06:12:50.516732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:25.015 [2024-10-01 06:12:50.516738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.015 [2024-10-01 06:12:50.516810] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:25.015 [2024-10-01 06:12:50.516819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:25.015 [2024-10-01 06:12:50.516825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.015 [2024-10-01 06:12:50.516834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.015 [2024-10-01 06:12:50.516857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:25.015 [2024-10-01 06:12:50.516862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:25.015 [2024-10-01 06:12:50.516868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:25.015 [2024-10-01 06:12:50.516875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:25.015 [2024-10-01 06:12:50.516881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:25.015 [2024-10-01 06:12:50.516890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.015 [2024-10-01 06:12:50.516896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:25.015 [2024-10-01 06:12:50.516901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:25.015 [2024-10-01 06:12:50.516906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.015 [2024-10-01 06:12:50.516911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:25.015 [2024-10-01 06:12:50.516916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:25.015 [2024-10-01 06:12:50.516922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.015 [2024-10-01 06:12:50.516929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:25.015 [2024-10-01 06:12:50.516937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:25.015 [2024-10-01 06:12:50.516942] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.015 [2024-10-01 06:12:50.516948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:25.015 [2024-10-01 06:12:50.516954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:25.015 [2024-10-01 06:12:50.516961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.015 [2024-10-01 06:12:50.516967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:25.015 [2024-10-01 06:12:50.516980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:25.015 [2024-10-01 06:12:50.516986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.015 [2024-10-01 06:12:50.516994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:25.015 [2024-10-01 06:12:50.517000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:25.015 [2024-10-01 06:12:50.517006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.015 [2024-10-01 06:12:50.517012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:25.015 [2024-10-01 06:12:50.517018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:25.015 [2024-10-01 06:12:50.517024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.015 [2024-10-01 06:12:50.517030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:25.015 [2024-10-01 06:12:50.517036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:25.015 [2024-10-01 06:12:50.517043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.015 [2024-10-01 06:12:50.517049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:25.015 [2024-10-01 06:12:50.517055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:25.015 [2024-10-01 06:12:50.517061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.015 [2024-10-01 06:12:50.517067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:25.015 [2024-10-01 06:12:50.517073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:25.015 [2024-10-01 06:12:50.517079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.015 [2024-10-01 06:12:50.517085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:25.015 [2024-10-01 06:12:50.517093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:25.015 [2024-10-01 06:12:50.517098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.015 [2024-10-01 06:12:50.517104] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:25.015 [2024-10-01 06:12:50.517110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:25.015 [2024-10-01 06:12:50.517118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.015 [2024-10-01 06:12:50.517126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.015 [2024-10-01 06:12:50.517132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:25.015 [2024-10-01 06:12:50.517138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:25.015 [2024-10-01 06:12:50.517145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:25.015 
[2024-10-01 06:12:50.517152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:25.015 [2024-10-01 06:12:50.517158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:25.015 [2024-10-01 06:12:50.517165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:25.015 [2024-10-01 06:12:50.517172] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:25.015 [2024-10-01 06:12:50.517180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.015 [2024-10-01 06:12:50.517188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:25.015 [2024-10-01 06:12:50.517195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:25.015 [2024-10-01 06:12:50.517203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:25.015 [2024-10-01 06:12:50.517209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:25.015 [2024-10-01 06:12:50.517216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:25.015 [2024-10-01 06:12:50.517223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:25.015 [2024-10-01 06:12:50.517229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:25.015 [2024-10-01 06:12:50.517236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:25.015 [2024-10-01 06:12:50.517242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:25.015 [2024-10-01 06:12:50.517253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:25.015 [2024-10-01 06:12:50.517259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:25.015 [2024-10-01 06:12:50.517266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:25.015 [2024-10-01 06:12:50.517272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:25.015 [2024-10-01 06:12:50.517279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:25.015 [2024-10-01 06:12:50.517284] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:25.015 [2024-10-01 06:12:50.517315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.015 [2024-10-01 06:12:50.517323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:25.015 [2024-10-01 06:12:50.517329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:25.016 [2024-10-01 06:12:50.517337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:25.016 [2024-10-01 06:12:50.517344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:25.016 [2024-10-01 06:12:50.517352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.517359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:25.016 [2024-10-01 06:12:50.517367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:20:25.016 [2024-10-01 06:12:50.517372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.539826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.539882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.016 [2024-10-01 06:12:50.539895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.413 ms 00:20:25.016 [2024-10-01 06:12:50.539904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.539994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.540003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:25.016 [2024-10-01 06:12:50.540012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:25.016 [2024-10-01 06:12:50.540019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.550877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.551066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.016 [2024-10-01 06:12:50.551087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.801 ms 00:20:25.016 [2024-10-01 06:12:50.551098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.551134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.551149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.016 [2024-10-01 06:12:50.551161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:25.016 [2024-10-01 06:12:50.551170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.551630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.551654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.016 [2024-10-01 06:12:50.551666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:20:25.016 [2024-10-01 06:12:50.551677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.551872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.551887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.016 [2024-10-01 06:12:50.551898] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:20:25.016 [2024-10-01 06:12:50.551918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.557702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.557729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.016 [2024-10-01 06:12:50.557740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.756 ms 00:20:25.016 [2024-10-01 06:12:50.557749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.560783] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:25.016 [2024-10-01 06:12:50.560812] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:25.016 [2024-10-01 06:12:50.560821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.560827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:25.016 [2024-10-01 06:12:50.560834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:20:25.016 [2024-10-01 06:12:50.560840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.572300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.572328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:25.016 [2024-10-01 06:12:50.572343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.419 ms 00:20:25.016 [2024-10-01 06:12:50.572349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.574210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.574235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:25.016 [2024-10-01 06:12:50.574242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 00:20:25.016 [2024-10-01 06:12:50.574248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.575915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.575947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:25.016 [2024-10-01 06:12:50.575961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:20:25.016 [2024-10-01 06:12:50.575967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.576223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.576233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:25.016 [2024-10-01 06:12:50.576240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:20:25.016 [2024-10-01 06:12:50.576245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.592931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.593073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:25.016 [2024-10-01 06:12:50.593092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.673 ms 00:20:25.016 [2024-10-01 06:12:50.593098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.599004] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:25.016 [2024-10-01 06:12:50.601154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.601182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:25.016 [2024-10-01 06:12:50.601192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.026 ms 00:20:25.016 [2024-10-01 06:12:50.601205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.601248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.601256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:25.016 [2024-10-01 06:12:50.601263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:25.016 [2024-10-01 06:12:50.601272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.602662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.602690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:25.016 [2024-10-01 06:12:50.602702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:20:25.016 [2024-10-01 06:12:50.602710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.602736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.602750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:25.016 [2024-10-01 06:12:50.602758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:25.016 [2024-10-01 06:12:50.602763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.602790] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:25.016 [2024-10-01 06:12:50.602798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.602805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:25.016 [2024-10-01 06:12:50.602811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:25.016 [2024-10-01 06:12:50.602817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.606589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.606617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:25.016 [2024-10-01 06:12:50.606626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:20:25.016 [2024-10-01 06:12:50.606632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 [2024-10-01 06:12:50.606690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.016 [2024-10-01 06:12:50.606700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:25.016 [2024-10-01 06:12:50.606708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:25.016 [2024-10-01 06:12:50.606714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.016 
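The trace above is the FTL 'startup' management pipeline recovering a previously written volume: NV cache state, valid map, band info, trim metadata, P2L checkpoints, and finally the L2P are restored step by step, each step logging its own duration and status. A minimal sketch of the call that drives this pipeline, assuming the scripts/rpc.py entry points used elsewhere in this run (bdev names are placeholders, and the -c cache flag spelling is an assumption here, since the log truncates the actual bdev_ftl_create invocation):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Creating the FTL bdev kicks off the 'FTL startup' management process;
    # on a volume that was not shut down cleanly it runs the restore steps
    # traced above instead of a fresh initialization.
    $RPC bdev_ftl_create -b ftl0 -d <base_bdev> -c <cache_bdev>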
[2024-10-01 06:12:50.607837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.100 ms, result 0 00:21:36.822  Copying: 13/1024 [MB] (13 MBps) ... Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-01 06:14:02.426532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.822 [2024-10-01 06:14:02.426625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:36.822 [2024-10-01 06:14:02.426645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:36.822 [2024-10-01 06:14:02.426656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.822 [2024-10-01 06:14:02.426682] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:36.822 [2024-10-01 06:14:02.427672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.822 [2024-10-01 06:14:02.427711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:36.822 [2024-10-01 06:14:02.427736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:21:36.822 [2024-10-01 06:14:02.427746] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.822 [2024-10-01 06:14:02.428016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.822 [2024-10-01 06:14:02.428028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:36.822 [2024-10-01 06:14:02.428037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:21:36.822 [2024-10-01 06:14:02.428047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.822 [2024-10-01 06:14:02.434908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.822 [2024-10-01 06:14:02.434961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:36.822 [2024-10-01 06:14:02.434973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.839 ms 00:21:36.822 [2024-10-01 06:14:02.434982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.083 [2024-10-01 06:14:02.441392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.083 [2024-10-01 06:14:02.441444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:37.083 [2024-10-01 06:14:02.441456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.367 ms 00:21:37.083 [2024-10-01 06:14:02.441472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.083 [2024-10-01 06:14:02.444686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.083 [2024-10-01 06:14:02.444902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:37.083 [2024-10-01 06:14:02.444923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:21:37.083 [2024-10-01 06:14:02.444933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.083 [2024-10-01 06:14:02.450662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.084 [2024-10-01 06:14:02.450751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:37.084 [2024-10-01 06:14:02.450772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.372 ms 00:21:37.084 [2024-10-01 06:14:02.450802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.347 [2024-10-01 06:14:02.828715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.347 [2024-10-01 06:14:02.828975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:37.347 [2024-10-01 06:14:02.829003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 377.813 ms 00:21:37.347 [2024-10-01 06:14:02.829014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.347 [2024-10-01 06:14:02.832330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.347 [2024-10-01 06:14:02.832517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:37.347 [2024-10-01 06:14:02.832537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.276 ms 00:21:37.347 [2024-10-01 06:14:02.832546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.347 [2024-10-01 06:14:02.835453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.347 [2024-10-01 06:14:02.835507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:37.347 [2024-10-01 06:14:02.835519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.795 ms 
00:21:37.347 [2024-10-01 06:14:02.835528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.347 [2024-10-01 06:14:02.837536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.347 [2024-10-01 06:14:02.837693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:37.347 [2024-10-01 06:14:02.837755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.966 ms 00:21:37.347 [2024-10-01 06:14:02.837778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.347 [2024-10-01 06:14:02.840084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.347 [2024-10-01 06:14:02.840247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:37.347 [2024-10-01 06:14:02.840266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:21:37.347 [2024-10-01 06:14:02.840274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.347 [2024-10-01 06:14:02.840308] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:37.347 [2024-10-01 06:14:02.840328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:21:37.347 [2024-10-01 06:14:02.840341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:21:37.347 [2024-10-01 06:14:02.840475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:37.347 [2024-10-01 06:14:02.840783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.840992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841154] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:37.348 [2024-10-01 06:14:02.841275] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:37.348 [2024-10-01 06:14:02.841286] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88a7bfd7-7279-4462-a23e-993b19e95361 00:21:37.348 [2024-10-01 06:14:02.841295] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:37.348 [2024-10-01 06:14:02.841306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30400 00:21:37.348 [2024-10-01 06:14:02.841320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29440 00:21:37.348 [2024-10-01 06:14:02.841341] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0326 00:21:37.348 [2024-10-01 06:14:02.841349] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:37.348 [2024-10-01 06:14:02.841357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:37.348 [2024-10-01 06:14:02.841367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:37.348 [2024-10-01 06:14:02.841375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:37.348 [2024-10-01 06:14:02.841382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:37.348 [2024-10-01 06:14:02.841390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.348 [2024-10-01 06:14:02.841400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:37.348 [2024-10-01 06:14:02.841409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:21:37.348 [2024-10-01 06:14:02.841417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.844563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.348 [2024-10-01 06:14:02.844597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:37.348 [2024-10-01 06:14:02.844618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:21:37.348 [2024-10-01 06:14:02.844634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.844796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.348 [2024-10-01 06:14:02.844807] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:37.348 [2024-10-01 06:14:02.844818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:21:37.348 [2024-10-01 06:14:02.844826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.854233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.348 [2024-10-01 06:14:02.854283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.348 [2024-10-01 06:14:02.854296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.348 [2024-10-01 06:14:02.854305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.854378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.348 [2024-10-01 06:14:02.854388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.348 [2024-10-01 06:14:02.854397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.348 [2024-10-01 06:14:02.854406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.854456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.348 [2024-10-01 06:14:02.854475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.348 [2024-10-01 06:14:02.854485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.348 [2024-10-01 06:14:02.854493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.854514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.348 [2024-10-01 06:14:02.854524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.348 [2024-10-01 06:14:02.854533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.348 [2024-10-01 06:14:02.854542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.873931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.348 [2024-10-01 06:14:02.874165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.348 [2024-10-01 06:14:02.874187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.348 [2024-10-01 06:14:02.874197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.889084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.348 [2024-10-01 06:14:02.889323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.348 [2024-10-01 06:14:02.889343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.348 [2024-10-01 06:14:02.889353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.889457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.348 [2024-10-01 06:14:02.889469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.348 [2024-10-01 06:14:02.889483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.348 [2024-10-01 06:14:02.889492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.348 [2024-10-01 06:14:02.889536] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:21:37.348 [2024-10-01 06:14:02.889546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.348 [2024-10-01 06:14:02.889556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.348 [2024-10-01 06:14:02.889564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.349 [2024-10-01 06:14:02.889653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.349 [2024-10-01 06:14:02.889664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.349 [2024-10-01 06:14:02.889675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.349 [2024-10-01 06:14:02.889689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.349 [2024-10-01 06:14:02.889729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.349 [2024-10-01 06:14:02.889740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:37.349 [2024-10-01 06:14:02.889749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.349 [2024-10-01 06:14:02.889757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.349 [2024-10-01 06:14:02.889808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.349 [2024-10-01 06:14:02.889818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.349 [2024-10-01 06:14:02.889828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.349 [2024-10-01 06:14:02.889839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.349 [2024-10-01 06:14:02.889928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.349 [2024-10-01 06:14:02.889941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.349 [2024-10-01 06:14:02.889950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.349 [2024-10-01 06:14:02.889959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.349 [2024-10-01 06:14:02.890122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.546 ms, result 0 00:21:37.610 00:21:37.610 00:21:37.610 06:14:03 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:40.158 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:40.158 Process with pid 85574 is not found 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 85574 00:21:40.158 06:14:05 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 85574 ']' 00:21:40.158 06:14:05 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 85574 00:21:40.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85574) - No such process 00:21:40.158 06:14:05 
ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 85574 is not found' 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:40.158 Remove shared memory files 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:40.158 06:14:05 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:40.158 ************************************ 00:21:40.158 END TEST ftl_restore 00:21:40.158 ************************************ 00:21:40.158 00:21:40.158 real 4m44.341s 00:21:40.158 user 4m31.117s 00:21:40.158 sys 0m13.269s 00:21:40.158 06:14:05 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.158 06:14:05 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:40.158 06:14:05 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:40.158 06:14:05 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:40.158 06:14:05 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.158 06:14:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:40.158 ************************************ 00:21:40.158 START TEST ftl_dirty_shutdown 00:21:40.158 ************************************ 00:21:40.158 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:40.420 * Looking for test storage... 
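The 'testfile: OK' line above is the payoff of the ftl_restore test: data checksummed before the dirty shutdown must read back bit-identical after the recovery traced earlier. The statistics dump is consistent with that: WAF = total writes / user writes = 30400 / 29440 ≈ 1.0326, i.e. only about 3% of writes were FTL housekeeping on top of user I/O. A minimal sketch of the verify pattern (paths follow the log; the actual staging inside restore.sh may differ):

    TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    md5sum "$TESTFILE" > "$TESTFILE.md5"   # checksum the data before the dirty shutdown
    # ... dirty shutdown, restart, FTL restore ...
    md5sum -c "$TESTFILE.md5"              # prints 'testfile: OK' when the data survived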
00:21:40.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:40.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.420 --rc genhtml_branch_coverage=1 00:21:40.420 --rc genhtml_function_coverage=1 00:21:40.420 --rc genhtml_legend=1 00:21:40.420 --rc geninfo_all_blocks=1 00:21:40.420 --rc geninfo_unexecuted_blocks=1 00:21:40.420 00:21:40.420 ' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:40.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.420 --rc genhtml_branch_coverage=1 00:21:40.420 --rc genhtml_function_coverage=1 00:21:40.420 --rc genhtml_legend=1 00:21:40.420 --rc geninfo_all_blocks=1 00:21:40.420 --rc geninfo_unexecuted_blocks=1 00:21:40.420 00:21:40.420 ' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:40.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.420 --rc genhtml_branch_coverage=1 00:21:40.420 --rc genhtml_function_coverage=1 00:21:40.420 --rc genhtml_legend=1 00:21:40.420 --rc geninfo_all_blocks=1 00:21:40.420 --rc geninfo_unexecuted_blocks=1 00:21:40.420 00:21:40.420 ' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:40.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.420 --rc genhtml_branch_coverage=1 00:21:40.420 --rc genhtml_function_coverage=1 00:21:40.420 --rc genhtml_legend=1 00:21:40.420 --rc geninfo_all_blocks=1 00:21:40.420 --rc geninfo_unexecuted_blocks=1 00:21:40.420 00:21:40.420 ' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:40.420 06:14:05 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88605 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88605 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 88605 ']' 00:21:40.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.420 06:14:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:40.420 [2024-10-01 06:14:06.004982] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
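The RPC traffic that follows builds the bdev stack for the dirty-shutdown run: a base namespace on 0000:00:11.0 carrying an lvstore and a thin lvol, and a cache namespace on 0000:00:10.0 that is split to serve as the FTL NV cache. A condensed sketch of that sequence, assuming the same scripts/rpc.py interface (sizes are in MiB; the UUIDs and the final cache argument are placeholders, since the log truncates before the complete bdev_ftl_create line):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base NVMe -> nvme0n1
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                          # lvstore on the base
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>            # thin 101 GiB lvol
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache NVMe -> nvc0n1
    $RPC bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB cache split
    $RPC bdev_ftl_create -b ftl0 -d <lvol_uuid> -c <cache_bdev>        # FTL over lvol + cache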
00:21:40.420 [2024-10-01 06:14:06.005147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88605 ] 00:21:40.682 [2024-10-01 06:14:06.142753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.682 [2024-10-01 06:14:06.216883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.254 06:14:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.254 06:14:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:41.254 06:14:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:41.254 06:14:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:41.254 06:14:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:41.254 06:14:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:41.254 06:14:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:41.254 06:14:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:41.829 { 00:21:41.829 "name": "nvme0n1", 00:21:41.829 "aliases": [ 00:21:41.829 "6e0361bf-32ee-4277-bb77-3ba472cf9018" 00:21:41.829 ], 00:21:41.829 "product_name": "NVMe disk", 00:21:41.829 "block_size": 4096, 00:21:41.829 "num_blocks": 1310720, 00:21:41.829 "uuid": "6e0361bf-32ee-4277-bb77-3ba472cf9018", 00:21:41.829 "numa_id": -1, 00:21:41.829 "assigned_rate_limits": { 00:21:41.829 "rw_ios_per_sec": 0, 00:21:41.829 "rw_mbytes_per_sec": 0, 00:21:41.829 "r_mbytes_per_sec": 0, 00:21:41.829 "w_mbytes_per_sec": 0 00:21:41.829 }, 00:21:41.829 "claimed": true, 00:21:41.829 "claim_type": "read_many_write_one", 00:21:41.829 "zoned": false, 00:21:41.829 "supported_io_types": { 00:21:41.829 "read": true, 00:21:41.829 "write": true, 00:21:41.829 "unmap": true, 00:21:41.829 "flush": true, 00:21:41.829 "reset": true, 00:21:41.829 "nvme_admin": true, 00:21:41.829 "nvme_io": true, 00:21:41.829 "nvme_io_md": false, 00:21:41.829 "write_zeroes": true, 00:21:41.829 "zcopy": false, 00:21:41.829 "get_zone_info": false, 00:21:41.829 "zone_management": false, 00:21:41.829 "zone_append": false, 00:21:41.829 "compare": true, 00:21:41.829 "compare_and_write": false, 00:21:41.829 "abort": true, 00:21:41.829 "seek_hole": false, 00:21:41.829 "seek_data": false, 00:21:41.829 
"copy": true, 00:21:41.829 "nvme_iov_md": false 00:21:41.829 }, 00:21:41.829 "driver_specific": { 00:21:41.829 "nvme": [ 00:21:41.829 { 00:21:41.829 "pci_address": "0000:00:11.0", 00:21:41.829 "trid": { 00:21:41.829 "trtype": "PCIe", 00:21:41.829 "traddr": "0000:00:11.0" 00:21:41.829 }, 00:21:41.829 "ctrlr_data": { 00:21:41.829 "cntlid": 0, 00:21:41.829 "vendor_id": "0x1b36", 00:21:41.829 "model_number": "QEMU NVMe Ctrl", 00:21:41.829 "serial_number": "12341", 00:21:41.829 "firmware_revision": "8.0.0", 00:21:41.829 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:41.829 "oacs": { 00:21:41.829 "security": 0, 00:21:41.829 "format": 1, 00:21:41.829 "firmware": 0, 00:21:41.829 "ns_manage": 1 00:21:41.829 }, 00:21:41.829 "multi_ctrlr": false, 00:21:41.829 "ana_reporting": false 00:21:41.829 }, 00:21:41.829 "vs": { 00:21:41.829 "nvme_version": "1.4" 00:21:41.829 }, 00:21:41.829 "ns_data": { 00:21:41.829 "id": 1, 00:21:41.829 "can_share": false 00:21:41.829 } 00:21:41.829 } 00:21:41.829 ], 00:21:41.829 "mp_policy": "active_passive" 00:21:41.829 } 00:21:41.829 } 00:21:41.829 ]' 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:41.829 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=61922f39-1daf-4f94-8eef-8cc5826d8a61 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:42.091 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61922f39-1daf-4f94-8eef-8cc5826d8a61 00:21:42.351 06:14:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:42.609 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=71ca2324-d035-4d48-a1fd-604e5a575603 00:21:42.609 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 71ca2324-d035-4d48-a1fd-604e5a575603 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=42390fee-cda7-4317-a594-56030d03a14a 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 42390fee-cda7-4317-a594-56030d03a14a 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=42390fee-cda7-4317-a594-56030d03a14a 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 42390fee-cda7-4317-a594-56030d03a14a 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=42390fee-cda7-4317-a594-56030d03a14a 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:42.867 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42390fee-cda7-4317-a594-56030d03a14a 00:21:43.125 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:43.125 { 00:21:43.125 "name": "42390fee-cda7-4317-a594-56030d03a14a", 00:21:43.125 "aliases": [ 00:21:43.125 "lvs/nvme0n1p0" 00:21:43.125 ], 00:21:43.125 "product_name": "Logical Volume", 00:21:43.125 "block_size": 4096, 00:21:43.125 "num_blocks": 26476544, 00:21:43.125 "uuid": "42390fee-cda7-4317-a594-56030d03a14a", 00:21:43.125 "assigned_rate_limits": { 00:21:43.125 "rw_ios_per_sec": 0, 00:21:43.125 "rw_mbytes_per_sec": 0, 00:21:43.125 "r_mbytes_per_sec": 0, 00:21:43.125 "w_mbytes_per_sec": 0 00:21:43.125 }, 00:21:43.125 "claimed": false, 00:21:43.125 "zoned": false, 00:21:43.125 "supported_io_types": { 00:21:43.125 "read": true, 00:21:43.125 "write": true, 00:21:43.125 "unmap": true, 00:21:43.125 "flush": false, 00:21:43.125 "reset": true, 00:21:43.125 "nvme_admin": false, 00:21:43.125 "nvme_io": false, 00:21:43.125 "nvme_io_md": false, 00:21:43.125 "write_zeroes": true, 00:21:43.125 "zcopy": false, 00:21:43.125 "get_zone_info": false, 00:21:43.125 "zone_management": false, 00:21:43.125 "zone_append": false, 00:21:43.125 "compare": false, 00:21:43.125 "compare_and_write": false, 00:21:43.125 "abort": false, 00:21:43.125 "seek_hole": true, 00:21:43.125 "seek_data": true, 00:21:43.125 "copy": false, 00:21:43.125 "nvme_iov_md": false 00:21:43.125 }, 00:21:43.125 "driver_specific": { 00:21:43.125 "lvol": { 00:21:43.125 "lvol_store_uuid": "71ca2324-d035-4d48-a1fd-604e5a575603", 00:21:43.125 "base_bdev": "nvme0n1", 00:21:43.125 "thin_provision": true, 00:21:43.125 "num_allocated_clusters": 0, 00:21:43.125 "snapshot": false, 00:21:43.125 "clone": false, 00:21:43.125 "esnap_clone": false 00:21:43.125 } 00:21:43.126 } 00:21:43.126 } 00:21:43.126 ]' 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:43.126 06:14:08 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:43.384 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:43.384 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:43.384 06:14:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 42390fee-cda7-4317-a594-56030d03a14a 00:21:43.384 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=42390fee-cda7-4317-a594-56030d03a14a 00:21:43.384 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:43.384 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:43.384 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:43.384 06:14:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42390fee-cda7-4317-a594-56030d03a14a 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:43.647 { 00:21:43.647 "name": "42390fee-cda7-4317-a594-56030d03a14a", 00:21:43.647 "aliases": [ 00:21:43.647 "lvs/nvme0n1p0" 00:21:43.647 ], 00:21:43.647 "product_name": "Logical Volume", 00:21:43.647 "block_size": 4096, 00:21:43.647 "num_blocks": 26476544, 00:21:43.647 "uuid": "42390fee-cda7-4317-a594-56030d03a14a", 00:21:43.647 "assigned_rate_limits": { 00:21:43.647 "rw_ios_per_sec": 0, 00:21:43.647 "rw_mbytes_per_sec": 0, 00:21:43.647 "r_mbytes_per_sec": 0, 00:21:43.647 "w_mbytes_per_sec": 0 00:21:43.647 }, 00:21:43.647 "claimed": false, 00:21:43.647 "zoned": false, 00:21:43.647 "supported_io_types": { 00:21:43.647 "read": true, 00:21:43.647 "write": true, 00:21:43.647 "unmap": true, 00:21:43.647 "flush": false, 00:21:43.647 "reset": true, 00:21:43.647 "nvme_admin": false, 00:21:43.647 "nvme_io": false, 00:21:43.647 "nvme_io_md": false, 00:21:43.647 "write_zeroes": true, 00:21:43.647 "zcopy": false, 00:21:43.647 "get_zone_info": false, 00:21:43.647 "zone_management": false, 00:21:43.647 "zone_append": false, 00:21:43.647 "compare": false, 00:21:43.647 "compare_and_write": false, 00:21:43.647 "abort": false, 00:21:43.647 "seek_hole": true, 00:21:43.647 "seek_data": true, 00:21:43.647 "copy": false, 00:21:43.647 "nvme_iov_md": false 00:21:43.647 }, 00:21:43.647 "driver_specific": { 00:21:43.647 "lvol": { 00:21:43.647 "lvol_store_uuid": "71ca2324-d035-4d48-a1fd-604e5a575603", 00:21:43.647 "base_bdev": "nvme0n1", 00:21:43.647 "thin_provision": true, 00:21:43.647 "num_allocated_clusters": 0, 00:21:43.647 "snapshot": false, 00:21:43.647 "clone": false, 00:21:43.647 "esnap_clone": false 00:21:43.647 } 00:21:43.647 } 00:21:43.647 } 00:21:43.647 ]' 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:43.647 06:14:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:43.907 06:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:43.907 06:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 42390fee-cda7-4317-a594-56030d03a14a 00:21:43.907 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=42390fee-cda7-4317-a594-56030d03a14a 00:21:43.907 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:43.908 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:43.908 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:43.908 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42390fee-cda7-4317-a594-56030d03a14a 00:21:43.908 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:43.908 { 00:21:43.908 "name": "42390fee-cda7-4317-a594-56030d03a14a", 00:21:43.908 "aliases": [ 00:21:43.908 "lvs/nvme0n1p0" 00:21:43.908 ], 00:21:43.908 "product_name": "Logical Volume", 00:21:43.908 "block_size": 4096, 00:21:43.908 "num_blocks": 26476544, 00:21:43.908 "uuid": "42390fee-cda7-4317-a594-56030d03a14a", 00:21:43.908 "assigned_rate_limits": { 00:21:43.908 "rw_ios_per_sec": 0, 00:21:43.908 "rw_mbytes_per_sec": 0, 00:21:43.908 "r_mbytes_per_sec": 0, 00:21:43.908 "w_mbytes_per_sec": 0 00:21:43.908 }, 00:21:43.908 "claimed": false, 00:21:43.908 "zoned": false, 00:21:43.908 "supported_io_types": { 00:21:43.908 "read": true, 00:21:43.908 "write": true, 00:21:43.908 "unmap": true, 00:21:43.908 "flush": false, 00:21:43.908 "reset": true, 00:21:43.908 "nvme_admin": false, 00:21:43.908 "nvme_io": false, 00:21:43.908 "nvme_io_md": false, 00:21:43.908 "write_zeroes": true, 00:21:43.908 "zcopy": false, 00:21:43.908 "get_zone_info": false, 00:21:43.908 "zone_management": false, 00:21:43.908 "zone_append": false, 00:21:43.908 "compare": false, 00:21:43.908 "compare_and_write": false, 00:21:43.908 "abort": false, 00:21:43.908 "seek_hole": true, 00:21:43.908 "seek_data": true, 00:21:43.908 "copy": false, 00:21:43.908 "nvme_iov_md": false 00:21:43.908 }, 00:21:43.908 "driver_specific": { 00:21:43.908 "lvol": { 00:21:43.908 "lvol_store_uuid": "71ca2324-d035-4d48-a1fd-604e5a575603", 00:21:43.908 "base_bdev": "nvme0n1", 00:21:43.908 "thin_provision": true, 00:21:43.908 "num_allocated_clusters": 0, 00:21:43.908 "snapshot": false, 00:21:43.908 "clone": false, 00:21:43.908 "esnap_clone": false 00:21:43.908 } 00:21:43.908 } 00:21:43.908 } 00:21:43.908 ]' 00:21:43.908 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 42390fee-cda7-4317-a594-56030d03a14a 
--l2p_dram_limit 10' 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:44.167 06:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 42390fee-cda7-4317-a594-56030d03a14a --l2p_dram_limit 10 -c nvc0n1p0 00:21:44.167 [2024-10-01 06:14:09.750493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.167 [2024-10-01 06:14:09.750541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:44.167 [2024-10-01 06:14:09.750554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:44.167 [2024-10-01 06:14:09.750562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.167 [2024-10-01 06:14:09.750604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.167 [2024-10-01 06:14:09.750614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:44.167 [2024-10-01 06:14:09.750620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:44.167 [2024-10-01 06:14:09.750630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.167 [2024-10-01 06:14:09.750649] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:44.167 [2024-10-01 06:14:09.750830] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:44.167 [2024-10-01 06:14:09.750842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.167 [2024-10-01 06:14:09.750867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:44.167 [2024-10-01 06:14:09.750875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:21:44.167 [2024-10-01 06:14:09.750883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.167 [2024-10-01 06:14:09.750906] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 329e6eb1-d2e9-4e44-aeda-5abbabe02bd5 00:21:44.167 [2024-10-01 06:14:09.752180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.167 [2024-10-01 06:14:09.752203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:44.167 [2024-10-01 06:14:09.752216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:44.167 [2024-10-01 06:14:09.752222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.167 [2024-10-01 06:14:09.759199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.167 [2024-10-01 06:14:09.759227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:44.167 [2024-10-01 06:14:09.759237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.908 ms 00:21:44.167 [2024-10-01 06:14:09.759248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.167 [2024-10-01 06:14:09.759316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.167 [2024-10-01 06:14:09.759324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:44.167 [2024-10-01 06:14:09.759331] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:44.167 [2024-10-01 06:14:09.759340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.167 [2024-10-01 06:14:09.759372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.167 [2024-10-01 06:14:09.759379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:44.167 [2024-10-01 06:14:09.759388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:44.167 [2024-10-01 06:14:09.759394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.167 [2024-10-01 06:14:09.759417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:44.167 [2024-10-01 06:14:09.761077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.167 [2024-10-01 06:14:09.761104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:44.167 [2024-10-01 06:14:09.761115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:21:44.167 [2024-10-01 06:14:09.761127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.168 [2024-10-01 06:14:09.761154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.168 [2024-10-01 06:14:09.761165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:44.168 [2024-10-01 06:14:09.761172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:44.168 [2024-10-01 06:14:09.761181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.168 [2024-10-01 06:14:09.761195] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:44.168 [2024-10-01 06:14:09.761323] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:44.168 [2024-10-01 06:14:09.761334] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:44.168 [2024-10-01 06:14:09.761348] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:44.168 [2024-10-01 06:14:09.761356] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761367] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761373] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:44.168 [2024-10-01 06:14:09.761383] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:44.168 [2024-10-01 06:14:09.761390] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:44.168 [2024-10-01 06:14:09.761397] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:44.168 [2024-10-01 06:14:09.761405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.168 [2024-10-01 06:14:09.761413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:44.168 [2024-10-01 06:14:09.761418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:44.168 [2024-10-01 06:14:09.761428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.168 [2024-10-01 06:14:09.761493] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.168 [2024-10-01 06:14:09.761503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:44.168 [2024-10-01 06:14:09.761509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:44.168 [2024-10-01 06:14:09.761515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.168 [2024-10-01 06:14:09.761591] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:44.168 [2024-10-01 06:14:09.761602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:44.168 [2024-10-01 06:14:09.761609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:44.168 [2024-10-01 06:14:09.761632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:44.168 [2024-10-01 06:14:09.761649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:44.168 [2024-10-01 06:14:09.761660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:44.168 [2024-10-01 06:14:09.761667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:44.168 [2024-10-01 06:14:09.761672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:44.168 [2024-10-01 06:14:09.761680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:44.168 [2024-10-01 06:14:09.761686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:44.168 [2024-10-01 06:14:09.761693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:44.168 [2024-10-01 06:14:09.761707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:44.168 [2024-10-01 06:14:09.761724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:44.168 [2024-10-01 06:14:09.761746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:44.168 [2024-10-01 06:14:09.761765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761780] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:44.168 [2024-10-01 06:14:09.761789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:44.168 [2024-10-01 06:14:09.761812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:44.168 [2024-10-01 06:14:09.761826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:44.168 [2024-10-01 06:14:09.761834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:44.168 [2024-10-01 06:14:09.761840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:44.168 [2024-10-01 06:14:09.761859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:44.168 [2024-10-01 06:14:09.761865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:44.168 [2024-10-01 06:14:09.761873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:44.168 [2024-10-01 06:14:09.761886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:44.168 [2024-10-01 06:14:09.761892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761900] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:44.168 [2024-10-01 06:14:09.761910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:44.168 [2024-10-01 06:14:09.761920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.168 [2024-10-01 06:14:09.761935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:44.168 [2024-10-01 06:14:09.761942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:44.168 [2024-10-01 06:14:09.761949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:44.168 [2024-10-01 06:14:09.761956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:44.168 [2024-10-01 06:14:09.761966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:44.168 [2024-10-01 06:14:09.761972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:44.168 [2024-10-01 06:14:09.761982] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:44.168 [2024-10-01 06:14:09.761991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:44.168 [2024-10-01 06:14:09.762000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:44.168 [2024-10-01 06:14:09.762007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:44.168 [2024-10-01 06:14:09.762016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:44.168 [2024-10-01 06:14:09.762023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:44.168 [2024-10-01 06:14:09.762031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:44.168 [2024-10-01 06:14:09.762038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:44.168 [2024-10-01 06:14:09.762047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:44.168 [2024-10-01 06:14:09.762055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:44.168 [2024-10-01 06:14:09.762063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:44.168 [2024-10-01 06:14:09.762069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:44.168 [2024-10-01 06:14:09.762077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:44.168 [2024-10-01 06:14:09.762084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:44.168 [2024-10-01 06:14:09.762092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:44.168 [2024-10-01 06:14:09.762099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:44.168 [2024-10-01 06:14:09.762107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:44.168 [2024-10-01 06:14:09.762116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:44.168 [2024-10-01 06:14:09.762124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:44.168 [2024-10-01 06:14:09.762130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:44.168 [2024-10-01 06:14:09.762138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:44.169 [2024-10-01 06:14:09.762144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:44.169 [2024-10-01 06:14:09.762152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.169 [2024-10-01 06:14:09.762158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:44.169 [2024-10-01 06:14:09.762167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:21:44.169 [2024-10-01 06:14:09.762173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.169 [2024-10-01 06:14:09.762210] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:44.169 [2024-10-01 06:14:09.762219] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:47.460 [2024-10-01 06:14:12.615204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.460 [2024-10-01 06:14:12.615314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:47.460 [2024-10-01 06:14:12.615342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2852.973 ms 00:21:47.460 [2024-10-01 06:14:12.615352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.460 [2024-10-01 06:14:12.634792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.460 [2024-10-01 06:14:12.634866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:47.460 [2024-10-01 06:14:12.634885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.285 ms 00:21:47.460 [2024-10-01 06:14:12.634895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.460 [2024-10-01 06:14:12.635029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.460 [2024-10-01 06:14:12.635043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:47.460 [2024-10-01 06:14:12.635066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:47.460 [2024-10-01 06:14:12.635075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.460 [2024-10-01 06:14:12.650949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.460 [2024-10-01 06:14:12.651008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:47.460 [2024-10-01 06:14:12.651024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.799 ms 00:21:47.460 [2024-10-01 06:14:12.651034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.460 [2024-10-01 06:14:12.651084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.460 [2024-10-01 06:14:12.651094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:47.460 [2024-10-01 06:14:12.651110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:47.460 [2024-10-01 06:14:12.651118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.460 [2024-10-01 06:14:12.651827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.460 [2024-10-01 06:14:12.651890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:47.460 [2024-10-01 06:14:12.651907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:21:47.460 [2024-10-01 06:14:12.651917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.460 [2024-10-01 06:14:12.652060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.460 [2024-10-01 06:14:12.652071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:47.460 [2024-10-01 06:14:12.652083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:21:47.460 [2024-10-01 06:14:12.652103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.460 [2024-10-01 06:14:12.677325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.460 [2024-10-01 06:14:12.677397] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:47.460 [2024-10-01 06:14:12.677416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.188 ms 00:21:47.460 [2024-10-01 06:14:12.677428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.460 [2024-10-01 06:14:12.689446] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:47.461 [2024-10-01 06:14:12.694407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.694459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:47.461 [2024-10-01 06:14:12.694472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.845 ms 00:21:47.461 [2024-10-01 06:14:12.694483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.782903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.782975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:47.461 [2024-10-01 06:14:12.782991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.381 ms 00:21:47.461 [2024-10-01 06:14:12.783006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.783247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.783262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:47.461 [2024-10-01 06:14:12.783271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:21:47.461 [2024-10-01 06:14:12.783283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.789426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.789485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:47.461 [2024-10-01 06:14:12.789498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.103 ms 00:21:47.461 [2024-10-01 06:14:12.789509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.794753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.794810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:47.461 [2024-10-01 06:14:12.794822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.180 ms 00:21:47.461 [2024-10-01 06:14:12.794832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.795250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.795273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:47.461 [2024-10-01 06:14:12.795284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:21:47.461 [2024-10-01 06:14:12.795297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.838685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.838745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:47.461 [2024-10-01 06:14:12.838760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.349 ms 00:21:47.461 [2024-10-01 06:14:12.838773] 
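The layout numbers in the startup trace can be cross-checked against each other: 20971520 L2P entries at the reported 4-byte address size come to exactly the 80 MiB l2p region in the NV-cache layout dump, and 20971520 addressable 4 KiB blocks is 80 GiB of logical space behind ftl0, versus the 100 GiB data_btm region — the gap presumably being FTL over-provisioning. With --l2p_dram_limit 10 only a slice of that table stays resident, hence the "9 (of 10) MiB" notice from ftl_l2p_cache.c above. A hypothetical sanity check, not part of the test scripts:

  echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80 -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( 20971520 * 4096 / 1024**3 ))       # 80 GiB of user-addressable space behind ftl0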
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.846956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.847012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:47.461 [2024-10-01 06:14:12.847024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.118 ms 00:21:47.461 [2024-10-01 06:14:12.847037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.852997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.853049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:47.461 [2024-10-01 06:14:12.853060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.909 ms 00:21:47.461 [2024-10-01 06:14:12.853071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.859833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.859917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:47.461 [2024-10-01 06:14:12.859928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.715 ms 00:21:47.461 [2024-10-01 06:14:12.859942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.859997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.860011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:47.461 [2024-10-01 06:14:12.860021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:47.461 [2024-10-01 06:14:12.860033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.860137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.461 [2024-10-01 06:14:12.860152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:47.461 [2024-10-01 06:14:12.860160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:47.461 [2024-10-01 06:14:12.860171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.461 [2024-10-01 06:14:12.861541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3110.448 ms, result 0 00:21:47.461 { 00:21:47.461 "name": "ftl0", 00:21:47.461 "uuid": "329e6eb1-d2e9-4e44-aeda-5abbabe02bd5" 00:21:47.461 } 00:21:47.461 06:14:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:47.461 06:14:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:47.722 06:14:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:47.722 06:14:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:47.722 06:14:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:47.722 /dev/nbd0 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:47.983 1+0 records in 00:21:47.983 1+0 records out 00:21:47.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048585 s, 8.4 MB/s 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:21:47.983 06:14:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:47.983 [2024-10-01 06:14:13.437258] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:21:47.983 [2024-10-01 06:14:13.437407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88749 ] 00:21:47.983 [2024-10-01 06:14:13.581095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.243 [2024-10-01 06:14:13.631781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.954  Copying: 188/1024 [MB] (188 MBps) Copying: 378/1024 [MB] (189 MBps) Copying: 611/1024 [MB] (233 MBps) Copying: 871/1024 [MB] (259 MBps) Copying: 1024/1024 [MB] (average 223 MBps) 00:21:52.954 00:21:52.954 06:14:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:54.858 06:14:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:54.858 [2024-10-01 06:14:20.444433] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
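The dd workload sizing: --bs=4096 --count=262144 is exactly 1 GiB, which is why both copy passes report 1024/1024 [MB]. The first spdk_dd run fills testfile from /dev/urandom at an average of ~223 MBps, md5sum then fingerprints it, and the second spdk_dd streams the same file onto /dev/nbd0 with O_DIRECT — i.e., through nbd into the ftl0 bdev — where, as the progress output that follows shows, the average drops to ~17 MBps. A quick check of the arithmetic (plain bash, outside the test):

  echo $(( 4096 * 262144 / 1024 / 1024 ))     # 1024 MB per copy pass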
00:21:54.858 [2024-10-01 06:14:20.444527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88826 ] 00:21:55.116 [2024-10-01 06:14:20.575307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.116 [2024-10-01 06:14:20.603839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.897  Copying: 16/1024 [MB] (16 MBps) Copying: 33/1024 [MB] (16 MBps) Copying: 48/1024 [MB] (15 MBps) Copying: 58/1024 [MB] (10 MBps) Copying: 76/1024 [MB] (17 MBps) Copying: 92/1024 [MB] (16 MBps) Copying: 107/1024 [MB] (14 MBps) Copying: 122/1024 [MB] (15 MBps) Copying: 156/1024 [MB] (33 MBps) Copying: 174/1024 [MB] (18 MBps) Copying: 193/1024 [MB] (19 MBps) Copying: 228/1024 [MB] (35 MBps) Copying: 251/1024 [MB] (22 MBps) Copying: 269/1024 [MB] (18 MBps) Copying: 287/1024 [MB] (18 MBps) Copying: 314/1024 [MB] (26 MBps) Copying: 344/1024 [MB] (30 MBps) Copying: 359/1024 [MB] (15 MBps) Copying: 376/1024 [MB] (16 MBps) Copying: 395/1024 [MB] (19 MBps) Copying: 415/1024 [MB] (20 MBps) Copying: 431/1024 [MB] (15 MBps) Copying: 448/1024 [MB] (17 MBps) Copying: 463/1024 [MB] (15 MBps) Copying: 474/1024 [MB] (10 MBps) Copying: 487/1024 [MB] (12 MBps) Copying: 501/1024 [MB] (13 MBps) Copying: 517/1024 [MB] (15 MBps) Copying: 530/1024 [MB] (13 MBps) Copying: 544/1024 [MB] (13 MBps) Copying: 561/1024 [MB] (17 MBps) Copying: 573/1024 [MB] (11 MBps) Copying: 589/1024 [MB] (16 MBps) Copying: 610/1024 [MB] (20 MBps) Copying: 624/1024 [MB] (14 MBps) Copying: 638/1024 [MB] (14 MBps) Copying: 656/1024 [MB] (17 MBps) Copying: 671/1024 [MB] (15 MBps) Copying: 691/1024 [MB] (19 MBps) Copying: 712/1024 [MB] (21 MBps) Copying: 724/1024 [MB] (11 MBps) Copying: 742/1024 [MB] (18 MBps) Copying: 758/1024 [MB] (15 MBps) Copying: 769/1024 [MB] (11 MBps) Copying: 780/1024 [MB] (11 MBps) Copying: 792/1024 [MB] (11 MBps) Copying: 804/1024 [MB] (11 MBps) Copying: 816/1024 [MB] (12 MBps) Copying: 837/1024 [MB] (20 MBps) Copying: 860/1024 [MB] (23 MBps) Copying: 889/1024 [MB] (28 MBps) Copying: 909/1024 [MB] (19 MBps) Copying: 928/1024 [MB] (19 MBps) Copying: 939/1024 [MB] (10 MBps) Copying: 953/1024 [MB] (14 MBps) Copying: 967/1024 [MB] (13 MBps) Copying: 984/1024 [MB] (16 MBps) Copying: 997/1024 [MB] (13 MBps) Copying: 1014/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 17 MBps) 00:22:54.897 00:22:54.897 06:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:54.897 06:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:55.159 06:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:55.159 [2024-10-01 06:15:20.760104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.159 [2024-10-01 06:15:20.760155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:55.159 [2024-10-01 06:15:20.760170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:55.159 [2024-10-01 06:15:20.760178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.159 [2024-10-01 06:15:20.760205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:55.159 [2024-10-01 
06:15:20.760651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.159 [2024-10-01 06:15:20.760685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:55.159 [2024-10-01 06:15:20.760695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:22:55.159 [2024-10-01 06:15:20.760706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.159 [2024-10-01 06:15:20.763505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.159 [2024-10-01 06:15:20.763541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:55.159 [2024-10-01 06:15:20.763551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:22:55.159 [2024-10-01 06:15:20.763561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.421 [2024-10-01 06:15:20.783478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.783516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:55.422 [2024-10-01 06:15:20.783527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.900 ms 00:22:55.422 [2024-10-01 06:15:20.783536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.789781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.789814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:55.422 [2024-10-01 06:15:20.789824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:22:55.422 [2024-10-01 06:15:20.789835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.791752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.791790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:55.422 [2024-10-01 06:15:20.791799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.838 ms 00:22:55.422 [2024-10-01 06:15:20.791808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.796604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.796640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:55.422 [2024-10-01 06:15:20.796650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.764 ms 00:22:55.422 [2024-10-01 06:15:20.796665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.796782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.796794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:55.422 [2024-10-01 06:15:20.796803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:55.422 [2024-10-01 06:15:20.796811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.799606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.799641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:55.422 [2024-10-01 06:15:20.799652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:22:55.422 [2024-10-01 06:15:20.799662] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.801697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.801742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:55.422 [2024-10-01 06:15:20.801752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:22:55.422 [2024-10-01 06:15:20.801762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.803419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.803456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:55.422 [2024-10-01 06:15:20.803465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:22:55.422 [2024-10-01 06:15:20.803473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.805071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.805105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:55.422 [2024-10-01 06:15:20.805114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:22:55.422 [2024-10-01 06:15:20.805123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.805165] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:55.422 [2024-10-01 06:15:20.805183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805306] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 
06:15:20.805531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:22:55.422 [2024-10-01 06:15:20.805736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.805994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.806003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.806018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.806026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.806034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.806044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.806052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:55.422 [2024-10-01 06:15:20.806070] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:55.422 [2024-10-01 06:15:20.806078] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 329e6eb1-d2e9-4e44-aeda-5abbabe02bd5 00:22:55.422 [2024-10-01 06:15:20.806088] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:55.422 [2024-10-01 06:15:20.806097] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:55.422 [2024-10-01 06:15:20.806107] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:55.422 [2024-10-01 06:15:20.806114] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:55.422 [2024-10-01 06:15:20.806123] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:55.422 [2024-10-01 06:15:20.806132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:55.422 [2024-10-01 06:15:20.806146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:55.422 [2024-10-01 06:15:20.806152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:55.422 [2024-10-01 06:15:20.806159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:55.422 [2024-10-01 06:15:20.806166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.806175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:55.422 [2024-10-01 06:15:20.806184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:22:55.422 [2024-10-01 06:15:20.806192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.807696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.807732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:55.422 [2024-10-01 
06:15:20.807741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:22:55.422 [2024-10-01 06:15:20.807751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.807840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.422 [2024-10-01 06:15:20.807868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:55.422 [2024-10-01 06:15:20.807877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:55.422 [2024-10-01 06:15:20.807886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.813079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.422 [2024-10-01 06:15:20.813114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:55.422 [2024-10-01 06:15:20.813123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.422 [2024-10-01 06:15:20.813149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.813202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.422 [2024-10-01 06:15:20.813214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:55.422 [2024-10-01 06:15:20.813221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.422 [2024-10-01 06:15:20.813230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.813298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.422 [2024-10-01 06:15:20.813311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:55.422 [2024-10-01 06:15:20.813319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.422 [2024-10-01 06:15:20.813327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.813344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.422 [2024-10-01 06:15:20.813355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:55.422 [2024-10-01 06:15:20.813362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.422 [2024-10-01 06:15:20.813374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.822113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.422 [2024-10-01 06:15:20.822155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:55.422 [2024-10-01 06:15:20.822166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.422 [2024-10-01 06:15:20.822175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.829710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.422 [2024-10-01 06:15:20.829752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:55.422 [2024-10-01 06:15:20.829766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.422 [2024-10-01 06:15:20.829776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.829818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.422 [2024-10-01 06:15:20.829834] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.422 [2024-10-01 06:15:20.829842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.422 [2024-10-01 06:15:20.829863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.422 [2024-10-01 06:15:20.829915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.422 [2024-10-01 06:15:20.829927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.422 [2024-10-01 06:15:20.829935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.423 [2024-10-01 06:15:20.829945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.423 [2024-10-01 06:15:20.830007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.423 [2024-10-01 06:15:20.830024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.423 [2024-10-01 06:15:20.830032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.423 [2024-10-01 06:15:20.830041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.423 [2024-10-01 06:15:20.830075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.423 [2024-10-01 06:15:20.830087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:55.423 [2024-10-01 06:15:20.830095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.423 [2024-10-01 06:15:20.830104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.423 [2024-10-01 06:15:20.830140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.423 [2024-10-01 06:15:20.830154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.423 [2024-10-01 06:15:20.830165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.423 [2024-10-01 06:15:20.830175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.423 [2024-10-01 06:15:20.830218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.423 [2024-10-01 06:15:20.830238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.423 [2024-10-01 06:15:20.830246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.423 [2024-10-01 06:15:20.830255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.423 [2024-10-01 06:15:20.830388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.250 ms, result 0 00:22:55.423 true 00:22:55.423 06:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88605 00:22:55.423 06:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88605 00:22:55.423 06:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:55.423 [2024-10-01 06:15:20.919986] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
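
The spdk_dd pre-fill above writes --count=262144 blocks of --bs=4096 bytes, i.e. exactly 1 GiB of random data into testfile2, which is why the progress ticker below counts up to 1024/1024 [MB]. The WAF figure in the statistics dump above is simply total writes divided by user writes, so it prints "inf" while user writes are still 0; by the end of this run the same dump shows 1.0100 for 96704 total / 95744 user writes. A quick shell check of both figures (a sketch, using the numbers exactly as they appear in the records):

    # Pre-fill size: 262144 blocks x 4096 bytes = 1024 MiB (1 GiB).
    echo "$((262144 * 4096 / 1024 / 1024)) MiB"            # -> 1024 MiB

    # WAF = total writes / user writes; with 0 user writes it prints as "inf".
    awk 'BEGIN { printf "WAF: %.4f\n", 96704 / 95744 }'    # -> WAF: 1.0100

At the ~223 MBps average reported below, the 1024 MiB pre-fill takes a little under five seconds, which matches the elapsed-time stamps on these records.
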
00:22:55.423 [2024-10-01 06:15:20.920114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89459 ] 00:22:55.683 [2024-10-01 06:15:21.056093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.683 [2024-10-01 06:15:21.109441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.401  Copying: 189/1024 [MB] (189 MBps) Copying: 378/1024 [MB] (189 MBps) Copying: 618/1024 [MB] (239 MBps) Copying: 877/1024 [MB] (259 MBps) Copying: 1024/1024 [MB] (average 223 MBps) 00:23:00.401 00:23:00.401 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88605 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:00.401 06:15:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:00.401 [2024-10-01 06:15:25.963317] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:23:00.401 [2024-10-01 06:15:25.963430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89513 ] 00:23:00.663 [2024-10-01 06:15:26.098231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.663 [2024-10-01 06:15:26.132325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.663 [2024-10-01 06:15:26.213998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.663 [2024-10-01 06:15:26.214046] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.663 [2024-10-01 06:15:26.276307] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:00.663 [2024-10-01 06:15:26.276889] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:00.663 [2024-10-01 06:15:26.277300] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:01.295 [2024-10-01 06:15:26.710226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.710265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:01.295 [2024-10-01 06:15:26.710277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:01.295 [2024-10-01 06:15:26.710287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.710321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.710329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:01.295 [2024-10-01 06:15:26.710337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:01.295 [2024-10-01 06:15:26.710342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.710355] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:01.295 [2024-10-01 06:15:26.710528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
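
From here on, every FTL management step is emitted by mngt/ftl_mngt.c as a quadruple of trace_step records: an Action (or Rollback) marker from line 427, the step name from 428, its duration from 430, and a status from 431, with finish_msg summing up the whole process (e.g. 'FTL startup', duration = 90.758 ms further down). Assuming one record per console line, as originally printed before wrapping, a short awk sketch can pair names with durations and rank the slowest steps; console.log is a placeholder file name:

    # Pair each "name:" record (428) with the "duration:" record (430)
    # that follows it, then list the slowest management steps first.
    awk '/428:trace_step/ { name = $0; sub(/.*name: /, "", name) }
         /430:trace_step/ { dur = $0; sub(/.*duration: /, "", dur); sub(/ ms.*/, "", dur)
                            printf "%10.3f ms  %s\n", dur, name }' console.log |
      sort -rn | head
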
00:23:01.295 [2024-10-01 06:15:26.710539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.710544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:01.295 [2024-10-01 06:15:26.710550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:23:01.295 [2024-10-01 06:15:26.710556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.711508] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:01.295 [2024-10-01 06:15:26.713663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.713692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:01.295 [2024-10-01 06:15:26.713700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:23:01.295 [2024-10-01 06:15:26.713706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.713746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.713753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:01.295 [2024-10-01 06:15:26.713760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:01.295 [2024-10-01 06:15:26.713765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.718084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.718107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:01.295 [2024-10-01 06:15:26.718114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:23:01.295 [2024-10-01 06:15:26.718123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.718186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.718193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:01.295 [2024-10-01 06:15:26.718199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:01.295 [2024-10-01 06:15:26.718205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.718240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.718250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:01.295 [2024-10-01 06:15:26.718255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:01.295 [2024-10-01 06:15:26.718261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.718275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:01.295 [2024-10-01 06:15:26.719429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.719450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:01.295 [2024-10-01 06:15:26.719457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:23:01.295 [2024-10-01 06:15:26.719462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.719488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 
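
Because this is the dirty_shutdown test, the records worth pulling out are the recovery breadcrumbs: the superblock above loads with "SHM: clean 0, shm_clean 0", the blobstore performs recovery (bs_recover), the device is marked dirty once startup completes ("Set FTL dirty state" below) and clean again on the orderly shutdown at the end ("Set FTL clean state"). A grep sketch to isolate just that lifecycle, again with console.log as a placeholder:

    grep -nE 'bs_recover|SHM: clean|Set FTL (dirty|clean) state' console.log
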
[2024-10-01 06:15:26.719497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:01.295 [2024-10-01 06:15:26.719503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:01.295 [2024-10-01 06:15:26.719513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.719526] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:01.295 [2024-10-01 06:15:26.719540] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:01.295 [2024-10-01 06:15:26.719575] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:01.295 [2024-10-01 06:15:26.719587] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:01.295 [2024-10-01 06:15:26.719677] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:01.295 [2024-10-01 06:15:26.719685] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:01.295 [2024-10-01 06:15:26.719693] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:01.295 [2024-10-01 06:15:26.719703] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:01.295 [2024-10-01 06:15:26.719710] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:01.295 [2024-10-01 06:15:26.719716] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:01.295 [2024-10-01 06:15:26.719721] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:01.295 [2024-10-01 06:15:26.719727] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:01.295 [2024-10-01 06:15:26.719733] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:01.295 [2024-10-01 06:15:26.719739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.719748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:01.295 [2024-10-01 06:15:26.719756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:23:01.295 [2024-10-01 06:15:26.719765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.295 [2024-10-01 06:15:26.719830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.295 [2024-10-01 06:15:26.719836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:01.296 [2024-10-01 06:15:26.719842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:01.296 [2024-10-01 06:15:26.719858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.719935] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:01.296 [2024-10-01 06:15:26.719945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:01.296 [2024-10-01 06:15:26.719955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.296 [2024-10-01 06:15:26.719961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.296 [2024-10-01 06:15:26.719967] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:01.296 [2024-10-01 06:15:26.719972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:01.296 [2024-10-01 06:15:26.719977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:01.296 [2024-10-01 06:15:26.719981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:01.296 [2024-10-01 06:15:26.719987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:01.296 [2024-10-01 06:15:26.719991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.296 [2024-10-01 06:15:26.719996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:01.296 [2024-10-01 06:15:26.720001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:01.296 [2024-10-01 06:15:26.720006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.296 [2024-10-01 06:15:26.720011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:01.296 [2024-10-01 06:15:26.720016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:01.296 [2024-10-01 06:15:26.720025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:01.296 [2024-10-01 06:15:26.720039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:01.296 [2024-10-01 06:15:26.720044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:01.296 [2024-10-01 06:15:26.720054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.296 [2024-10-01 06:15:26.720064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:01.296 [2024-10-01 06:15:26.720069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.296 [2024-10-01 06:15:26.720079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:01.296 [2024-10-01 06:15:26.720084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.296 [2024-10-01 06:15:26.720094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:01.296 [2024-10-01 06:15:26.720099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.296 [2024-10-01 06:15:26.720108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:01.296 [2024-10-01 06:15:26.720113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.296 [2024-10-01 06:15:26.720124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:01.296 [2024-10-01 06:15:26.720129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:01.296 [2024-10-01 
06:15:26.720134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.296 [2024-10-01 06:15:26.720139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:01.296 [2024-10-01 06:15:26.720144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:01.296 [2024-10-01 06:15:26.720148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:01.296 [2024-10-01 06:15:26.720158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:01.296 [2024-10-01 06:15:26.720163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720167] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:01.296 [2024-10-01 06:15:26.720173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:01.296 [2024-10-01 06:15:26.720178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.296 [2024-10-01 06:15:26.720185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.296 [2024-10-01 06:15:26.720194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:01.296 [2024-10-01 06:15:26.720199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:01.296 [2024-10-01 06:15:26.720205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:01.296 [2024-10-01 06:15:26.720210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:01.296 [2024-10-01 06:15:26.720215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:01.296 [2024-10-01 06:15:26.720220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:01.296 [2024-10-01 06:15:26.720226] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:01.296 [2024-10-01 06:15:26.720233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.296 [2024-10-01 06:15:26.720239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:01.296 [2024-10-01 06:15:26.720245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:01.296 [2024-10-01 06:15:26.720250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:01.296 [2024-10-01 06:15:26.720255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:01.296 [2024-10-01 06:15:26.720260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:01.296 [2024-10-01 06:15:26.720269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:01.296 [2024-10-01 06:15:26.720274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:01.296 [2024-10-01 06:15:26.720279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:01.296 [2024-10-01 06:15:26.720284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:01.296 [2024-10-01 06:15:26.720289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:01.296 [2024-10-01 06:15:26.720296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:01.296 [2024-10-01 06:15:26.720301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:01.296 [2024-10-01 06:15:26.720306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:01.296 [2024-10-01 06:15:26.720311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:01.296 [2024-10-01 06:15:26.720317] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:01.296 [2024-10-01 06:15:26.720326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.296 [2024-10-01 06:15:26.720332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:01.296 [2024-10-01 06:15:26.720337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:01.296 [2024-10-01 06:15:26.720342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:01.296 [2024-10-01 06:15:26.720347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:01.296 [2024-10-01 06:15:26.720353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.720360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:01.296 [2024-10-01 06:15:26.720366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:23:01.296 [2024-10-01 06:15:26.720371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.743551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.743644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:01.296 [2024-10-01 06:15:26.743679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.137 ms 00:23:01.296 [2024-10-01 06:15:26.743703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.743962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.744013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:01.296 [2024-10-01 06:15:26.744043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:23:01.296 [2024-10-01 06:15:26.744068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.753038] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.753062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:01.296 [2024-10-01 06:15:26.753073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.830 ms 00:23:01.296 [2024-10-01 06:15:26.753080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.753102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.753111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:01.296 [2024-10-01 06:15:26.753118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:01.296 [2024-10-01 06:15:26.753124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.753440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.753461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:01.296 [2024-10-01 06:15:26.753468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:23:01.296 [2024-10-01 06:15:26.753479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.753577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.753587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:01.296 [2024-10-01 06:15:26.753598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:01.296 [2024-10-01 06:15:26.753604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.757702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.757724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:01.296 [2024-10-01 06:15:26.757732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms 00:23:01.296 [2024-10-01 06:15:26.757738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.759926] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:01.296 [2024-10-01 06:15:26.759951] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:01.296 [2024-10-01 06:15:26.759960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.759966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:01.296 [2024-10-01 06:15:26.759973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:23:01.296 [2024-10-01 06:15:26.759979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.771286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.771309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:01.296 [2024-10-01 06:15:26.771317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.276 ms 00:23:01.296 [2024-10-01 06:15:26.771324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.772839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 
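
The layout dump above also pins down where the 80.00 MiB Region l2p comes from: 20971520 L2P entries at an address size of 4 bytes. Assuming the usual 4 KiB FTL block, the same entry count corresponds to 80 GiB of user-addressable space against the 103424.00 MiB base device, which suggests the remainder is metadata and overprovisioning. Checking the arithmetic in shell:

    # L2P table: entries x address size, as in the layout dump.
    echo "$((20971520 * 4 / 1024 / 1024)) MiB"            # -> 80 MiB (Region l2p)

    # Mapped capacity: entries x 4 KiB block (assumed block size).
    echo "$((20971520 * 4096 / 1024 / 1024 / 1024)) GiB"  # -> 80 GiB

Only a small window of that table is kept resident, per the "l2p maximum resident size is: 9 (of 10) MiB" notice just below.
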
[2024-10-01 06:15:26.772868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:01.296 [2024-10-01 06:15:26.772874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:23:01.296 [2024-10-01 06:15:26.772879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.774220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.774242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:01.296 [2024-10-01 06:15:26.774249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:23:01.296 [2024-10-01 06:15:26.774255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.774490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.774501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:01.296 [2024-10-01 06:15:26.774513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:23:01.296 [2024-10-01 06:15:26.774518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.789226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.789263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:01.296 [2024-10-01 06:15:26.789273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.695 ms 00:23:01.296 [2024-10-01 06:15:26.789279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.795050] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:01.296 [2024-10-01 06:15:26.797004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.797024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:01.296 [2024-10-01 06:15:26.797032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.690 ms 00:23:01.296 [2024-10-01 06:15:26.797039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.797086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.797094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:01.296 [2024-10-01 06:15:26.797101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:01.296 [2024-10-01 06:15:26.797108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.797182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.797193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:01.296 [2024-10-01 06:15:26.797200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:01.296 [2024-10-01 06:15:26.797209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.797228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.296 [2024-10-01 06:15:26.797235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:01.296 [2024-10-01 06:15:26.797242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:01.296 [2024-10-01 
06:15:26.797250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.296 [2024-10-01 06:15:26.797275] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:01.296 [2024-10-01 06:15:26.797285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.297 [2024-10-01 06:15:26.797291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:01.297 [2024-10-01 06:15:26.797297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:01.297 [2024-10-01 06:15:26.797302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.297 [2024-10-01 06:15:26.800372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.297 [2024-10-01 06:15:26.800403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:01.297 [2024-10-01 06:15:26.800411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.057 ms 00:23:01.297 [2024-10-01 06:15:26.800419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.297 [2024-10-01 06:15:26.800473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.297 [2024-10-01 06:15:26.800483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:01.297 [2024-10-01 06:15:26.800489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:01.297 [2024-10-01 06:15:26.800495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.297 [2024-10-01 06:15:26.801311] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 90.758 ms, result 0 00:24:21.805  Copying: 18/1024 [MB] (18 MBps) Copying: 36/1024 [MB] (17 MBps) Copying: 52/1024 [MB] (16 MBps) Copying: 72/1024 [MB] (19 MBps) Copying: 88/1024 [MB] (15 MBps) Copying: 104/1024 [MB] (15 MBps) Copying: 122/1024 [MB] (18 MBps) Copying: 137/1024 [MB] (15 MBps) Copying: 148/1024 [MB] (10 MBps) Copying: 165/1024 [MB] (16 MBps) Copying: 180/1024 [MB] (15 MBps) Copying: 195240/1048576 [kB] (10032 kBps) Copying: 203/1024 [MB] (13 MBps) Copying: 214/1024 [MB] (10 MBps) Copying: 225/1024 [MB] (11 MBps) Copying: 240564/1048576 [kB] (9264 kBps) Copying: 245/1024 [MB] (10 MBps) Copying: 257/1024 [MB] (12 MBps) Copying: 268/1024 [MB] (10 MBps) Copying: 283/1024 [MB] (15 MBps) Copying: 295/1024 [MB] (11 MBps) Copying: 306/1024 [MB] (10 MBps) Copying: 318/1024 [MB] (12 MBps) Copying: 329/1024 [MB] (10 MBps) Copying: 344192/1048576 [kB] (6792 kBps) Copying: 349512/1048576 [kB] (5320 kBps) Copying: 358576/1048576 [kB] (9064 kBps) Copying: 367868/1048576 [kB] (9292 kBps) Copying: 377136/1048576 [kB] (9268 kBps) Copying: 385652/1048576 [kB] (8516 kBps) Copying: 394260/1048576 [kB] (8608 kBps) Copying: 403280/1048576 [kB] (9020 kBps) Copying: 412744/1048576 [kB] (9464 kBps) Copying: 422860/1048576 [kB] (10116 kBps) Copying: 432760/1048576 [kB] (9900 kBps) Copying: 441976/1048576 [kB] (9216 kBps) Copying: 451416/1048576 [kB] (9440 kBps) Copying: 451/1024 [MB] (10 MBps) Copying: 471752/1048576 [kB] (9512 kBps) Copying: 481860/1048576 [kB] (10108 kBps) Copying: 491480/1048576 [kB] (9620 kBps) Copying: 500592/1048576 [kB] (9112 kBps) Copying: 509680/1048576 [kB] (9088 kBps) Copying: 519188/1048576 [kB] (9508 kBps) Copying: 528752/1048576 [kB] (9564 kBps) Copying: 537840/1048576 [kB] (9088 kBps) Copying: 547128/1048576 [kB] (9288 kBps) Copying: 556216/1048576 [kB] (9088 kBps) Copying: 
565752/1048576 [kB] (9536 kBps) Copying: 575360/1048576 [kB] (9608 kBps) Copying: 585176/1048576 [kB] (9816 kBps) Copying: 585/1024 [MB] (13 MBps) Copying: 608616/1048576 [kB] (9388 kBps) Copying: 610/1024 [MB] (15 MBps) Copying: 627/1024 [MB] (16 MBps) Copying: 641/1024 [MB] (14 MBps) Copying: 653/1024 [MB] (11 MBps) Copying: 664/1024 [MB] (10 MBps) Copying: 689848/1048576 [kB] (9904 kBps) Copying: 692/1024 [MB] (19 MBps) Copying: 707/1024 [MB] (15 MBps) Copying: 726/1024 [MB] (18 MBps) Copying: 747/1024 [MB] (20 MBps) Copying: 770/1024 [MB] (22 MBps) Copying: 781/1024 [MB] (11 MBps) Copying: 803/1024 [MB] (21 MBps) Copying: 818/1024 [MB] (14 MBps) Copying: 834/1024 [MB] (16 MBps) Copying: 845/1024 [MB] (10 MBps) Copying: 863/1024 [MB] (18 MBps) Copying: 889/1024 [MB] (26 MBps) Copying: 915/1024 [MB] (25 MBps) Copying: 936/1024 [MB] (20 MBps) Copying: 952/1024 [MB] (16 MBps) Copying: 975/1024 [MB] (23 MBps) Copying: 989/1024 [MB] (13 MBps) Copying: 999/1024 [MB] (10 MBps) Copying: 1009/1024 [MB] (10 MBps) Copying: 1042928/1048576 [kB] (8760 kBps) Copying: 1048084/1048576 [kB] (5156 kBps) Copying: 1024/1024 [MB] (average 12 MBps)[2024-10-01 06:16:47.320682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.805 [2024-10-01 06:16:47.320764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:21.805 [2024-10-01 06:16:47.320782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:21.805 [2024-10-01 06:16:47.320790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.805 [2024-10-01 06:16:47.324180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:21.805 [2024-10-01 06:16:47.329260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.805 [2024-10-01 06:16:47.329345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:21.805 [2024-10-01 06:16:47.329370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.019 ms 00:24:21.805 [2024-10-01 06:16:47.329386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.805 [2024-10-01 06:16:47.345841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.805 [2024-10-01 06:16:47.345959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:21.805 [2024-10-01 06:16:47.345975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.626 ms 00:24:21.805 [2024-10-01 06:16:47.345985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.805 [2024-10-01 06:16:47.371352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.805 [2024-10-01 06:16:47.371487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:21.805 [2024-10-01 06:16:47.371508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.341 ms 00:24:21.805 [2024-10-01 06:16:47.371517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.805 [2024-10-01 06:16:47.377742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.805 [2024-10-01 06:16:47.377799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:21.805 [2024-10-01 06:16:47.377812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:24:21.805 [2024-10-01 06:16:47.377822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.806 [2024-10-01 
06:16:47.380353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.806 [2024-10-01 06:16:47.380404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:21.806 [2024-10-01 06:16:47.380418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:24:21.806 [2024-10-01 06:16:47.380426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.806 [2024-10-01 06:16:47.383749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.806 [2024-10-01 06:16:47.383795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:21.806 [2024-10-01 06:16:47.383818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:24:21.806 [2024-10-01 06:16:47.383827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.067 [2024-10-01 06:16:47.606683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.067 [2024-10-01 06:16:47.606807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:22.067 [2024-10-01 06:16:47.606823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 222.813 ms 00:24:22.067 [2024-10-01 06:16:47.606832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.067 [2024-10-01 06:16:47.611010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.067 [2024-10-01 06:16:47.611066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:22.067 [2024-10-01 06:16:47.611080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.145 ms 00:24:22.067 [2024-10-01 06:16:47.611089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.067 [2024-10-01 06:16:47.614150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.067 [2024-10-01 06:16:47.614194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:22.067 [2024-10-01 06:16:47.614206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:24:22.067 [2024-10-01 06:16:47.614214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.067 [2024-10-01 06:16:47.616277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.067 [2024-10-01 06:16:47.616311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:22.067 [2024-10-01 06:16:47.616321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:24:22.067 [2024-10-01 06:16:47.616329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.067 [2024-10-01 06:16:47.618218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.067 [2024-10-01 06:16:47.618251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:22.067 [2024-10-01 06:16:47.618261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:24:22.067 [2024-10-01 06:16:47.618268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.067 [2024-10-01 06:16:47.618298] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:22.067 [2024-10-01 06:16:47.618316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 95744 / 261120 wr_cnt: 1 state: open 00:24:22.067 [2024-10-01 06:16:47.618328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:24:22.067 [2024-10-01 06:16:47.618337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:24:22.067 [2024-10-01 06:16:47.618546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:22.067 [2024-10-01 06:16:47.618572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618982] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.618998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:22.068 [2024-10-01 06:16:47.619188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:22.068 [2024-10-01 06:16:47.619199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 329e6eb1-d2e9-4e44-aeda-5abbabe02bd5 00:24:22.068 [2024-10-01 06:16:47.619223] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 95744 00:24:22.068 [2024-10-01 06:16:47.619231] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 96704 00:24:22.068 [2024-10-01 06:16:47.619239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 95744 00:24:22.068 [2024-10-01 06:16:47.619247] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0100 00:24:22.068 [2024-10-01 06:16:47.619255] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:22.068 [2024-10-01 06:16:47.619264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:22.068 [2024-10-01 06:16:47.619271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:22.068 [2024-10-01 06:16:47.619278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:22.068 [2024-10-01 06:16:47.619285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:22.068 [2024-10-01 06:16:47.619293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.068 [2024-10-01 06:16:47.619301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:22.068 [2024-10-01 06:16:47.619311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:24:22.068 [2024-10-01 06:16:47.619318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.068 [2024-10-01 06:16:47.621245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.068 [2024-10-01 06:16:47.621268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:22.068 [2024-10-01 06:16:47.621279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:24:22.068 [2024-10-01 06:16:47.621288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.068 [2024-10-01 06:16:47.621415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.068 [2024-10-01 06:16:47.621426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:22.068 [2024-10-01 06:16:47.621442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:22.068 [2024-10-01 06:16:47.621451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.068 [2024-10-01 06:16:47.627101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.068 [2024-10-01 06:16:47.627150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.068 [2024-10-01 06:16:47.627168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.627177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.627253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.627262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.069 [2024-10-01 06:16:47.627274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.627282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.627325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.627335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:24:22.069 [2024-10-01 06:16:47.627343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.627351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.627366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.627375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.069 [2024-10-01 06:16:47.627383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.627393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.639769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.639832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.069 [2024-10-01 06:16:47.639856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.639865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.649423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.649487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.069 [2024-10-01 06:16:47.649510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.649518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.649594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.649604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.069 [2024-10-01 06:16:47.649613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.649621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.649646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.649656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.069 [2024-10-01 06:16:47.649663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.649671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.649740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.649750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.069 [2024-10-01 06:16:47.649758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.649766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.649794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.649807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:22.069 [2024-10-01 06:16:47.649815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.649822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.649881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 
06:16:47.649892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.069 [2024-10-01 06:16:47.649900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.649908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.649957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.069 [2024-10-01 06:16:47.649967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.069 [2024-10-01 06:16:47.649975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.069 [2024-10-01 06:16:47.649982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.069 [2024-10-01 06:16:47.650127] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.578 ms, result 0 00:24:23.507 00:24:23.507 00:24:23.507 06:16:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:24.891 06:16:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:24.891 [2024-10-01 06:16:50.459394] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:24:24.891 [2024-10-01 06:16:50.459525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90359 ] 00:24:25.151 [2024-10-01 06:16:50.593166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.151 [2024-10-01 06:16:50.627223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.151 [2024-10-01 06:16:50.733773] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.151 [2024-10-01 06:16:50.733875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.413 [2024-10-01 06:16:50.899125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.413 [2024-10-01 06:16:50.899195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:25.413 [2024-10-01 06:16:50.899215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:25.413 [2024-10-01 06:16:50.899225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.413 [2024-10-01 06:16:50.899294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.413 [2024-10-01 06:16:50.899305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.413 [2024-10-01 06:16:50.899314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:25.413 [2024-10-01 06:16:50.899329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.413 [2024-10-01 06:16:50.899351] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:25.413 [2024-10-01 06:16:50.899662] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:25.413 [2024-10-01 06:16:50.899679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:25.413 [2024-10-01 06:16:50.899687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.413 [2024-10-01 06:16:50.899699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:24:25.414 [2024-10-01 06:16:50.899708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.901379] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:25.414 [2024-10-01 06:16:50.904688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.904730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:25.414 [2024-10-01 06:16:50.904751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.311 ms 00:24:25.414 [2024-10-01 06:16:50.904761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.904841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.904870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:25.414 [2024-10-01 06:16:50.904880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:25.414 [2024-10-01 06:16:50.904890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.911737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.911791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.414 [2024-10-01 06:16:50.911804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.795 ms 00:24:25.414 [2024-10-01 06:16:50.911814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.911957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.911968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.414 [2024-10-01 06:16:50.911977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:24:25.414 [2024-10-01 06:16:50.911986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.912069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.912085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:25.414 [2024-10-01 06:16:50.912094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:25.414 [2024-10-01 06:16:50.912106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.912136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:25.414 [2024-10-01 06:16:50.913922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.913959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.414 [2024-10-01 06:16:50.913970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms 00:24:25.414 [2024-10-01 06:16:50.913984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.914024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.914036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate 
bands 00:24:25.414 [2024-10-01 06:16:50.914045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:25.414 [2024-10-01 06:16:50.914052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.914091] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:25.414 [2024-10-01 06:16:50.914117] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:25.414 [2024-10-01 06:16:50.914163] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:25.414 [2024-10-01 06:16:50.914182] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:25.414 [2024-10-01 06:16:50.914292] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:25.414 [2024-10-01 06:16:50.914308] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:25.414 [2024-10-01 06:16:50.914323] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:25.414 [2024-10-01 06:16:50.914333] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914345] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914354] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:25.414 [2024-10-01 06:16:50.914363] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:25.414 [2024-10-01 06:16:50.914374] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:25.414 [2024-10-01 06:16:50.914385] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:25.414 [2024-10-01 06:16:50.914394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.914402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:25.414 [2024-10-01 06:16:50.914409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:24:25.414 [2024-10-01 06:16:50.914416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.914500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.414 [2024-10-01 06:16:50.914508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:25.414 [2024-10-01 06:16:50.914518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:25.414 [2024-10-01 06:16:50.914525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.414 [2024-10-01 06:16:50.914636] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:25.414 [2024-10-01 06:16:50.914654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:25.414 [2024-10-01 06:16:50.914664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:25.414 [2024-10-01 06:16:50.914691] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:25.414 [2024-10-01 06:16:50.914716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.414 [2024-10-01 06:16:50.914736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:25.414 [2024-10-01 06:16:50.914744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:25.414 [2024-10-01 06:16:50.914752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.414 [2024-10-01 06:16:50.914760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:25.414 [2024-10-01 06:16:50.914768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:25.414 [2024-10-01 06:16:50.914776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:25.414 [2024-10-01 06:16:50.914797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:25.414 [2024-10-01 06:16:50.914821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:25.414 [2024-10-01 06:16:50.914891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:25.414 [2024-10-01 06:16:50.914916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:25.414 [2024-10-01 06:16:50.914940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.414 [2024-10-01 06:16:50.914955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:25.414 [2024-10-01 06:16:50.914965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:25.414 [2024-10-01 06:16:50.914973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.414 [2024-10-01 06:16:50.914982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:25.414 [2024-10-01 06:16:50.914990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:25.414 [2024-10-01 06:16:50.914999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.414 
[2024-10-01 06:16:50.915007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:25.414 [2024-10-01 06:16:50.915015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:25.414 [2024-10-01 06:16:50.915023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.414 [2024-10-01 06:16:50.915031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:25.414 [2024-10-01 06:16:50.915039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:25.414 [2024-10-01 06:16:50.915047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.414 [2024-10-01 06:16:50.915055] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:25.414 [2024-10-01 06:16:50.915065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:25.414 [2024-10-01 06:16:50.915074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.414 [2024-10-01 06:16:50.915084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.414 [2024-10-01 06:16:50.915094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:25.414 [2024-10-01 06:16:50.915104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:25.414 [2024-10-01 06:16:50.915111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:25.414 [2024-10-01 06:16:50.915119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:25.414 [2024-10-01 06:16:50.915127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:25.414 [2024-10-01 06:16:50.915135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:25.415 [2024-10-01 06:16:50.915144] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:25.415 [2024-10-01 06:16:50.915158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.415 [2024-10-01 06:16:50.915169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:25.415 [2024-10-01 06:16:50.915178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:25.415 [2024-10-01 06:16:50.915186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:25.415 [2024-10-01 06:16:50.915194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:25.415 [2024-10-01 06:16:50.915203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:25.415 [2024-10-01 06:16:50.915212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:25.415 [2024-10-01 06:16:50.915221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:25.415 [2024-10-01 06:16:50.915229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:25.415 [2024-10-01 06:16:50.915238] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:25.415 [2024-10-01 06:16:50.915255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:25.415 [2024-10-01 06:16:50.915264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:25.415 [2024-10-01 06:16:50.915273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:25.415 [2024-10-01 06:16:50.915282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:25.415 [2024-10-01 06:16:50.915291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:25.415 [2024-10-01 06:16:50.915299] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:25.415 [2024-10-01 06:16:50.915309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.415 [2024-10-01 06:16:50.915319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:25.415 [2024-10-01 06:16:50.915327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:25.415 [2024-10-01 06:16:50.915336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:25.415 [2024-10-01 06:16:50.915344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:25.415 [2024-10-01 06:16:50.915353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.915361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:25.415 [2024-10-01 06:16:50.915373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:24:25.415 [2024-10-01 06:16:50.915382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.935213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.935292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.415 [2024-10-01 06:16:50.935313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.779 ms 00:24:25.415 [2024-10-01 06:16:50.935329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.935481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.935496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:25.415 [2024-10-01 06:16:50.935514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:25.415 [2024-10-01 06:16:50.935527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.947210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.947266] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.415 [2024-10-01 06:16:50.947285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.573 ms 00:24:25.415 [2024-10-01 06:16:50.947293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.947350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.947367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.415 [2024-10-01 06:16:50.947376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:25.415 [2024-10-01 06:16:50.947384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.947861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.947890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.415 [2024-10-01 06:16:50.947901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:24:25.415 [2024-10-01 06:16:50.947914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.948067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.948083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.415 [2024-10-01 06:16:50.948092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:24:25.415 [2024-10-01 06:16:50.948101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.953956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.954000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.415 [2024-10-01 06:16:50.954019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.833 ms 00:24:25.415 [2024-10-01 06:16:50.954028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.957177] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:25.415 [2024-10-01 06:16:50.957219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:25.415 [2024-10-01 06:16:50.957234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.957244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:25.415 [2024-10-01 06:16:50.957255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.094 ms 00:24:25.415 [2024-10-01 06:16:50.957264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.972218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.972287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:25.415 [2024-10-01 06:16:50.972313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.902 ms 00:24:25.415 [2024-10-01 06:16:50.972322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.975761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.975812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:24:25.415 [2024-10-01 06:16:50.975824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:24:25.415 [2024-10-01 06:16:50.975832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.977890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.977933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:25.415 [2024-10-01 06:16:50.977945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:24:25.415 [2024-10-01 06:16:50.977954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.978311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.978328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:25.415 [2024-10-01 06:16:50.978338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:24:25.415 [2024-10-01 06:16:50.978346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:50.999575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:50.999655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:25.415 [2024-10-01 06:16:50.999671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.200 ms 00:24:25.415 [2024-10-01 06:16:50.999682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:51.008588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:25.415 [2024-10-01 06:16:51.012250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:51.012307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:25.415 [2024-10-01 06:16:51.012329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.497 ms 00:24:25.415 [2024-10-01 06:16:51.012339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:51.012457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:51.012475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:25.415 [2024-10-01 06:16:51.012485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:25.415 [2024-10-01 06:16:51.012493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:51.014124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:51.014169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:25.415 [2024-10-01 06:16:51.014183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms 00:24:25.415 [2024-10-01 06:16:51.014194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:51.014234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:51.014253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:25.415 [2024-10-01 06:16:51.014262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:25.415 [2024-10-01 06:16:51.014270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 
06:16:51.014309] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:25.415 [2024-10-01 06:16:51.014321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.415 [2024-10-01 06:16:51.014329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:25.415 [2024-10-01 06:16:51.014338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:25.415 [2024-10-01 06:16:51.014346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.415 [2024-10-01 06:16:51.019383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.416 [2024-10-01 06:16:51.019436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:25.416 [2024-10-01 06:16:51.019449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.006 ms 00:24:25.416 [2024-10-01 06:16:51.019457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.416 [2024-10-01 06:16:51.019535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.416 [2024-10-01 06:16:51.019545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:25.416 [2024-10-01 06:16:51.019554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:25.416 [2024-10-01 06:16:51.019562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.416 [2024-10-01 06:16:51.020645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.053 ms, result 0 00:25:29.521  Copying: 1012/1048576 [kB] (1012 kBps) Copying: 3544/1048576 [kB] (2532 kBps) Copying: 13780/1048576 [kB] (10236 kBps) Copying: 29/1024 [MB] (16 MBps) Copying: 46/1024 [MB] (16 MBps) Copying: 59/1024 [MB] (12 MBps) Copying: 76/1024 [MB] (17 MBps) Copying: 102/1024 [MB] (25 MBps) Copying: 124/1024 [MB] (22 MBps) Copying: 140/1024 [MB] (15 MBps) Copying: 155/1024 [MB] (14 MBps) Copying: 170/1024 [MB] (14 MBps) Copying: 184/1024 [MB] (14 MBps) Copying: 202/1024 [MB] (17 MBps) Copying: 219/1024 [MB] (17 MBps) Copying: 234/1024 [MB] (14 MBps) Copying: 248/1024 [MB] (14 MBps) Copying: 262/1024 [MB] (14 MBps) Copying: 276/1024 [MB] (13 MBps) Copying: 294/1024 [MB] (17 MBps) Copying: 312/1024 [MB] (18 MBps) Copying: 331/1024 [MB] (18 MBps) Copying: 347/1024 [MB] (15 MBps) Copying: 364/1024 [MB] (17 MBps) Copying: 380/1024 [MB] (15 MBps) Copying: 396/1024 [MB] (15 MBps) Copying: 411/1024 [MB] (14 MBps) Copying: 428/1024 [MB] (17 MBps) Copying: 445/1024 [MB] (16 MBps) Copying: 463/1024 [MB] (17 MBps) Copying: 479/1024 [MB] (16 MBps) Copying: 497/1024 [MB] (18 MBps) Copying: 513/1024 [MB] (15 MBps) Copying: 528/1024 [MB] (14 MBps) Copying: 543/1024 [MB] (15 MBps) Copying: 558/1024 [MB] (14 MBps) Copying: 578/1024 [MB] (19 MBps) Copying: 600/1024 [MB] (22 MBps) Copying: 616/1024 [MB] (16 MBps) Copying: 630/1024 [MB] (14 MBps) Copying: 644/1024 [MB] (13 MBps) Copying: 657/1024 [MB] (13 MBps) Copying: 671/1024 [MB] (13 MBps) Copying: 688/1024 [MB] (16 MBps) Copying: 714/1024 [MB] (26 MBps) Copying: 733/1024 [MB] (19 MBps) Copying: 752/1024 [MB] (18 MBps) Copying: 771/1024 [MB] (18 MBps) Copying: 797/1024 [MB] (25 MBps) Copying: 812/1024 [MB] (15 MBps) Copying: 826/1024 [MB] (13 MBps) Copying: 840/1024 [MB] (13 MBps) Copying: 855/1024 [MB] (14 MBps) Copying: 875/1024 [MB] (19 MBps) Copying: 891/1024 [MB] (16 MBps) Copying: 906/1024 [MB] (15 MBps) Copying: 923/1024 [MB] (16 MBps) 
Copying: 939/1024 [MB] (15 MBps) Copying: 954/1024 [MB] (15 MBps) Copying: 969/1024 [MB] (15 MBps) Copying: 984/1024 [MB] (14 MBps) Copying: 1000/1024 [MB] (15 MBps) Copying: 1015/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-01 06:17:55.034758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.034837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:29.521 [2024-10-01 06:17:55.034873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:29.521 [2024-10-01 06:17:55.034982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.035015] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:29.521 [2024-10-01 06:17:55.035631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.035662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:29.521 [2024-10-01 06:17:55.035683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:25:29.521 [2024-10-01 06:17:55.035694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.036168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.036197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:29.521 [2024-10-01 06:17:55.036211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:25:29.521 [2024-10-01 06:17:55.036223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.050515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.050566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:29.521 [2024-10-01 06:17:55.050578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.268 ms 00:25:29.521 [2024-10-01 06:17:55.050586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.056917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.056953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:29.521 [2024-10-01 06:17:55.056967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.286 ms 00:25:29.521 [2024-10-01 06:17:55.056976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.059544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.059597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:29.521 [2024-10-01 06:17:55.059608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:25:29.521 [2024-10-01 06:17:55.059616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.063704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.063739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:29.521 [2024-10-01 06:17:55.063751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.056 ms 00:25:29.521 [2024-10-01 06:17:55.063767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 
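As an aside, the headline numbers in this run can be reproduced from the logged counters alone. The short sketch below does so; it assumes a 4 KiB FTL logical block size, which is an inference from the fact that the `spdk_dd --count=262144` transfer shows up as 1048576 kB / 1024 MB in the progress meter, not something the log states directly. The ~64 s elapsed time is likewise read off the wall-clock timestamps (FTL startup finished at 06:16:51, first shutdown trace step at 06:17:55).

```python
# Editor's sanity-check of the figures reported in the log above; not part of
# the test itself. BLOCK_SIZE is an assumption (see the note preceding this
# sketch), everything else is copied from the logged output.

BLOCK_SIZE = 4096          # bytes per FTL logical block (assumed, 4 KiB)
blocks = 262144            # spdk_dd --count argument from the logged command

total_bytes = blocks * BLOCK_SIZE
# 262144 * 4096 = 1 GiB, i.e. the "1048576 [kB]" / "1024/1024 [MB]" totals
# in the Copying progress meter above.
assert total_bytes == 1024 * 1024 * 1024

# Average throughput over the ~64 s between 06:16:51 and 06:17:55:
elapsed_s = 64
print(total_bytes / (1024 * 1024) / elapsed_s)   # -> 16.0, matching "(average 16 MBps)"

# Write-amplification factor from the first ftl_dev_dump_stats block
# (total writes / user writes; the log prints it to 4 decimals as 1.0100):
total_writes, user_writes = 96704, 95744
print(total_writes / user_writes)                # -> 1.01002..., logged as "WAF: 1.0100"
```

The same arithmetic applies to the stats dump that follows the shutdown below (168896 / 166912 ≈ 1.0119, logged as "WAF: 1.0119").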
[2024-10-01 06:17:55.069200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.069237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.521 [2024-10-01 06:17:55.069248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.401 ms 00:25:29.521 [2024-10-01 06:17:55.069256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.071929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.071959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:29.521 [2024-10-01 06:17:55.071969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:25:29.521 [2024-10-01 06:17:55.071978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.074042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.074070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:29.521 [2024-10-01 06:17:55.074079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:25:29.521 [2024-10-01 06:17:55.074086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.075883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.075910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:29.521 [2024-10-01 06:17:55.075918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:25:29.521 [2024-10-01 06:17:55.075925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.077735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.521 [2024-10-01 06:17:55.077765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:29.521 [2024-10-01 06:17:55.077774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:25:29.521 [2024-10-01 06:17:55.077780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.521 [2024-10-01 06:17:55.077807] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:29.521 [2024-10-01 06:17:55.077822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:29.521 [2024-10-01 06:17:55.077833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:29.521 [2024-10-01 06:17:55.077842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077903] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:29.521 [2024-10-01 06:17:55.077926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.077997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 
[2024-10-01 06:17:55.078096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 
state: free 00:25:29.522 [2024-10-01 06:17:55.078297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 
0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:29.522 [2024-10-01 06:17:55.078622] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:29.523 [2024-10-01 06:17:55.078635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 329e6eb1-d2e9-4e44-aeda-5abbabe02bd5 00:25:29.523 [2024-10-01 06:17:55.078643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:29.523 [2024-10-01 06:17:55.078657] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 168896 00:25:29.523 [2024-10-01 06:17:55.078664] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 166912 00:25:29.523 [2024-10-01 06:17:55.078672] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119 00:25:29.523 [2024-10-01 06:17:55.078681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:29.523 [2024-10-01 06:17:55.078689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:29.523 [2024-10-01 06:17:55.078697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:29.523 [2024-10-01 
06:17:55.078703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:29.523 [2024-10-01 06:17:55.078709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:29.523 [2024-10-01 06:17:55.078717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.523 [2024-10-01 06:17:55.078725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:29.523 [2024-10-01 06:17:55.078733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:25:29.523 [2024-10-01 06:17:55.078741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.080485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.523 [2024-10-01 06:17:55.080509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:29.523 [2024-10-01 06:17:55.080519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms 00:25:29.523 [2024-10-01 06:17:55.080528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.080625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.523 [2024-10-01 06:17:55.080633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:29.523 [2024-10-01 06:17:55.080642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:29.523 [2024-10-01 06:17:55.080651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.086064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.086103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.523 [2024-10-01 06:17:55.086113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.086121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.086183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.086192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.523 [2024-10-01 06:17:55.086200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.086208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.086250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.086259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.523 [2024-10-01 06:17:55.086267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.086279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.086294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.086303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.523 [2024-10-01 06:17:55.086310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.086318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.097545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.097600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:25:29.523 [2024-10-01 06:17:55.097612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.097620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.106521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.106581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.523 [2024-10-01 06:17:55.106593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.106601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.106670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.106680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.523 [2024-10-01 06:17:55.106689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.106697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.106724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.106732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.523 [2024-10-01 06:17:55.106747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.106755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.106825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.106838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.523 [2024-10-01 06:17:55.106863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.106872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.106901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.106914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:29.523 [2024-10-01 06:17:55.106923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.106931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.106975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.106986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.523 [2024-10-01 06:17:55.106995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.107002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.107046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.523 [2024-10-01 06:17:55.107056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.523 [2024-10-01 06:17:55.107064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.523 [2024-10-01 06:17:55.107075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.523 [2024-10-01 06:17:55.107205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 72.421 ms, result 0 00:25:29.785 00:25:29.785 00:25:29.785 06:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:32.333 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:32.333 06:17:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:32.333 [2024-10-01 06:17:57.568860] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:25:32.333 [2024-10-01 06:17:57.568988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91047 ] 00:25:32.333 [2024-10-01 06:17:57.704581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.333 [2024-10-01 06:17:57.747535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.333 [2024-10-01 06:17:57.850103] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:32.333 [2024-10-01 06:17:57.850177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:32.595 [2024-10-01 06:17:58.010226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.595 [2024-10-01 06:17:58.010274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:32.595 [2024-10-01 06:17:58.010291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:32.595 [2024-10-01 06:17:58.010300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.595 [2024-10-01 06:17:58.010352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.595 [2024-10-01 06:17:58.010365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:32.595 [2024-10-01 06:17:58.010377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:32.595 [2024-10-01 06:17:58.010393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.595 [2024-10-01 06:17:58.010415] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:32.595 [2024-10-01 06:17:58.010955] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:32.595 [2024-10-01 06:17:58.010994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.011005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:32.596 [2024-10-01 06:17:58.011015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:25:32.596 [2024-10-01 06:17:58.011023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.012445] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:32.596 [2024-10-01 06:17:58.015223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.015252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:32.596 [2024-10-01 06:17:58.015264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.781 ms 00:25:32.596 [2024-10-01 06:17:58.015272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.015328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.015343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:32.596 [2024-10-01 06:17:58.015352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:32.596 [2024-10-01 06:17:58.015359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.021701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.021726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:32.596 [2024-10-01 06:17:58.021736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.299 ms 00:25:32.596 [2024-10-01 06:17:58.021744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.021831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.021841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:32.596 [2024-10-01 06:17:58.021866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:32.596 [2024-10-01 06:17:58.021876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.021918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.021928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:32.596 [2024-10-01 06:17:58.021936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:32.596 [2024-10-01 06:17:58.021951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.021974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:32.596 [2024-10-01 06:17:58.023655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.023678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:32.596 [2024-10-01 06:17:58.023687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:25:32.596 [2024-10-01 06:17:58.023696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.023725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.023733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:32.596 [2024-10-01 06:17:58.023741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:32.596 [2024-10-01 06:17:58.023749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.023781] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:32.596 [2024-10-01 06:17:58.023802] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:32.596 [2024-10-01 06:17:58.023888] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:32.596 [2024-10-01 06:17:58.023905] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:25:32.596 [2024-10-01 06:17:58.024011] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:32.596 [2024-10-01 06:17:58.024022] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:32.596 [2024-10-01 06:17:58.024033] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:32.596 [2024-10-01 06:17:58.024044] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024057] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024066] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:32.596 [2024-10-01 06:17:58.024073] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:32.596 [2024-10-01 06:17:58.024081] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:32.596 [2024-10-01 06:17:58.024089] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:32.596 [2024-10-01 06:17:58.024097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.024105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:32.596 [2024-10-01 06:17:58.024112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:25:32.596 [2024-10-01 06:17:58.024122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.024208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.596 [2024-10-01 06:17:58.024217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:32.596 [2024-10-01 06:17:58.024227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:32.596 [2024-10-01 06:17:58.024235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.596 [2024-10-01 06:17:58.024335] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:32.596 [2024-10-01 06:17:58.024346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:32.596 [2024-10-01 06:17:58.024356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:32.596 [2024-10-01 06:17:58.024383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:32.596 [2024-10-01 06:17:58.024409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:32.596 [2024-10-01 06:17:58.024426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:32.596 [2024-10-01 06:17:58.024436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:32.596 [2024-10-01 06:17:58.024444] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:32.596 [2024-10-01 06:17:58.024452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:32.596 [2024-10-01 06:17:58.024461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:32.596 [2024-10-01 06:17:58.024468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:32.596 [2024-10-01 06:17:58.024483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:32.596 [2024-10-01 06:17:58.024508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:32.596 [2024-10-01 06:17:58.024532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:32.596 [2024-10-01 06:17:58.024556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:32.596 [2024-10-01 06:17:58.024584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:32.596 [2024-10-01 06:17:58.024607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:32.596 [2024-10-01 06:17:58.024622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:32.596 [2024-10-01 06:17:58.024629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:32.596 [2024-10-01 06:17:58.024636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:32.596 [2024-10-01 06:17:58.024644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:32.596 [2024-10-01 06:17:58.024652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:32.596 [2024-10-01 06:17:58.024660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:32.596 [2024-10-01 06:17:58.024674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:32.596 [2024-10-01 06:17:58.024682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024692] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:32.596 [2024-10-01 
06:17:58.024701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:32.596 [2024-10-01 06:17:58.024709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:32.596 [2024-10-01 06:17:58.024721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.596 [2024-10-01 06:17:58.024730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:32.596 [2024-10-01 06:17:58.024737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:32.596 [2024-10-01 06:17:58.024745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:32.596 [2024-10-01 06:17:58.024753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:32.596 [2024-10-01 06:17:58.024761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:32.597 [2024-10-01 06:17:58.024769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:32.597 [2024-10-01 06:17:58.024778] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:32.597 [2024-10-01 06:17:58.024788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:32.597 [2024-10-01 06:17:58.024803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:32.597 [2024-10-01 06:17:58.024812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:32.597 [2024-10-01 06:17:58.024820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:32.597 [2024-10-01 06:17:58.024828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:32.597 [2024-10-01 06:17:58.024838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:32.597 [2024-10-01 06:17:58.024859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:32.597 [2024-10-01 06:17:58.024868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:32.597 [2024-10-01 06:17:58.024877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:32.597 [2024-10-01 06:17:58.024885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:32.597 [2024-10-01 06:17:58.024899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:32.597 [2024-10-01 06:17:58.024907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:32.597 [2024-10-01 06:17:58.024916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:32.597 [2024-10-01 06:17:58.024924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:25:32.597 [2024-10-01 06:17:58.024933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:32.597 [2024-10-01 06:17:58.024941] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:32.597 [2024-10-01 06:17:58.024950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:32.597 [2024-10-01 06:17:58.024960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:32.597 [2024-10-01 06:17:58.024968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:32.597 [2024-10-01 06:17:58.024977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:32.597 [2024-10-01 06:17:58.024985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:32.597 [2024-10-01 06:17:58.024996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.025005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:32.597 [2024-10-01 06:17:58.025013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:25:32.597 [2024-10-01 06:17:58.025021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.044777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.044819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:32.597 [2024-10-01 06:17:58.044832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.711 ms 00:25:32.597 [2024-10-01 06:17:58.044840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.044943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.044970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:32.597 [2024-10-01 06:17:58.044980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:32.597 [2024-10-01 06:17:58.044987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.056124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.056158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:32.597 [2024-10-01 06:17:58.056171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.081 ms 00:25:32.597 [2024-10-01 06:17:58.056182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.056218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.056230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:32.597 [2024-10-01 06:17:58.056241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:32.597 [2024-10-01 06:17:58.056258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.056725] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.056764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:32.597 [2024-10-01 06:17:58.056783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:25:32.597 [2024-10-01 06:17:58.056795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.056990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.057005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:32.597 [2024-10-01 06:17:58.057018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:25:32.597 [2024-10-01 06:17:58.057030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.062779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.062804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:32.597 [2024-10-01 06:17:58.062818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.724 ms 00:25:32.597 [2024-10-01 06:17:58.062825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.065576] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:32.597 [2024-10-01 06:17:58.065613] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:32.597 [2024-10-01 06:17:58.065627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.065636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:32.597 [2024-10-01 06:17:58.065645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:25:32.597 [2024-10-01 06:17:58.065653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.080418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.080460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:32.597 [2024-10-01 06:17:58.080478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.724 ms 00:25:32.597 [2024-10-01 06:17:58.080485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.082648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.082675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:32.597 [2024-10-01 06:17:58.082684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:25:32.597 [2024-10-01 06:17:58.082691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.084447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.084473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:32.597 [2024-10-01 06:17:58.084482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:25:32.597 [2024-10-01 06:17:58.084489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.084820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 
06:17:58.084839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:32.597 [2024-10-01 06:17:58.084861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:25:32.597 [2024-10-01 06:17:58.084869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.102701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.102750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:32.597 [2024-10-01 06:17:58.102766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.814 ms 00:25:32.597 [2024-10-01 06:17:58.102774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.110296] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:32.597 [2024-10-01 06:17:58.112748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.112778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:32.597 [2024-10-01 06:17:58.112789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.935 ms 00:25:32.597 [2024-10-01 06:17:58.112805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.112911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.112923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:32.597 [2024-10-01 06:17:58.112933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:32.597 [2024-10-01 06:17:58.112946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.113649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.113676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:32.597 [2024-10-01 06:17:58.113686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:25:32.597 [2024-10-01 06:17:58.113697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.113731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.113740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:32.597 [2024-10-01 06:17:58.113748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:32.597 [2024-10-01 06:17:58.113756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.597 [2024-10-01 06:17:58.113792] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:32.597 [2024-10-01 06:17:58.113802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.597 [2024-10-01 06:17:58.113810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:32.597 [2024-10-01 06:17:58.113818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:32.597 [2024-10-01 06:17:58.113828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.598 [2024-10-01 06:17:58.117975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.598 [2024-10-01 06:17:58.118005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:32.598 [2024-10-01 
06:17:58.118015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:25:32.598 [2024-10-01 06:17:58.118023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.598 [2024-10-01 06:17:58.118098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.598 [2024-10-01 06:17:58.118108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:32.598 [2024-10-01 06:17:58.118116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:32.598 [2024-10-01 06:17:58.118124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.598 [2024-10-01 06:17:58.119115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.421 ms, result 0 00:26:50.217  Copying: 10/1024 [MB] (10 MBps) Copying: 23/1024 [MB] (12 MBps) Copying: 40/1024 [MB] (16 MBps) Copying: 50956/1048576 [kB] (9444 kBps) Copying: 60836/1048576 [kB] (9880 kBps) Copying: 71020/1048576 [kB] (10184 kBps) Copying: 80660/1048576 [kB] (9640 kBps) Copying: 88/1024 [MB] (10 MBps) Copying: 98/1024 [MB] (10 MBps) Copying: 111464/1048576 [kB] (10164 kBps) Copying: 121288/1048576 [kB] (9824 kBps) Copying: 131464/1048576 [kB] (10176 kBps) Copying: 141056/1048576 [kB] (9592 kBps) Copying: 150600/1048576 [kB] (9544 kBps) Copying: 157/1024 [MB] (10 MBps) Copying: 171/1024 [MB] (14 MBps) Copying: 183/1024 [MB] (12 MBps) Copying: 195/1024 [MB] (12 MBps) Copying: 209/1024 [MB] (13 MBps) Copying: 225/1024 [MB] (16 MBps) Copying: 236/1024 [MB] (10 MBps) Copying: 249/1024 [MB] (12 MBps) Copying: 260/1024 [MB] (11 MBps) Copying: 276740/1048576 [kB] (9872 kBps) Copying: 286704/1048576 [kB] (9964 kBps) Copying: 290/1024 [MB] (10 MBps) Copying: 307216/1048576 [kB] (9632 kBps) Copying: 310/1024 [MB] (10 MBps) Copying: 321/1024 [MB] (11 MBps) Copying: 332/1024 [MB] (10 MBps) Copying: 350392/1048576 [kB] (10224 kBps) Copying: 352/1024 [MB] (10 MBps) Copying: 370928/1048576 [kB] (10104 kBps) Copying: 372/1024 [MB] (10 MBps) Copying: 382/1024 [MB] (10 MBps) Copying: 393/1024 [MB] (11 MBps) Copying: 405/1024 [MB] (11 MBps) Copying: 416/1024 [MB] (11 MBps) Copying: 426/1024 [MB] (10 MBps) Copying: 437/1024 [MB] (10 MBps) Copying: 458216/1048576 [kB] (10168 kBps) Copying: 457/1024 [MB] (10 MBps) Copying: 468/1024 [MB] (11 MBps) Copying: 489984/1048576 [kB] (10060 kBps) Copying: 500172/1048576 [kB] (10188 kBps) Copying: 510352/1048576 [kB] (10180 kBps) Copying: 520584/1048576 [kB] (10232 kBps) Copying: 518/1024 [MB] (10 MBps) Copying: 528/1024 [MB] (10 MBps) Copying: 539/1024 [MB] (10 MBps) Copying: 562168/1048576 [kB] (10200 kBps) Copying: 572208/1048576 [kB] (10040 kBps) Copying: 568/1024 [MB] (10 MBps) Copying: 579/1024 [MB] (11 MBps) Copying: 622/1024 [MB] (42 MBps) Copying: 652/1024 [MB] (30 MBps) Copying: 673/1024 [MB] (20 MBps) Copying: 701/1024 [MB] (28 MBps) Copying: 722/1024 [MB] (20 MBps) Copying: 735/1024 [MB] (12 MBps) Copying: 749/1024 [MB] (14 MBps) Copying: 767/1024 [MB] (18 MBps) Copying: 783/1024 [MB] (16 MBps) Copying: 800/1024 [MB] (16 MBps) Copying: 823/1024 [MB] (23 MBps) Copying: 839/1024 [MB] (16 MBps) Copying: 852/1024 [MB] (12 MBps) Copying: 863/1024 [MB] (10 MBps) Copying: 879/1024 [MB] (15 MBps) Copying: 902/1024 [MB] (23 MBps) Copying: 921/1024 [MB] (19 MBps) Copying: 937/1024 [MB] (15 MBps) Copying: 955/1024 [MB] (18 MBps) Copying: 967/1024 [MB] (12 MBps) Copying: 979/1024 [MB] (11 MBps) Copying: 999/1024 [MB] (20 MBps) Copying: 1017/1024 [MB] (17 
MBps) Copying: 1024/1024 [MB] (average 13 MBps)[2024-10-01 06:19:15.665700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.665779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:50.217 [2024-10-01 06:19:15.665799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:50.217 [2024-10-01 06:19:15.665812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.665865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:50.217 [2024-10-01 06:19:15.666493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.666526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:50.217 [2024-10-01 06:19:15.666539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:26:50.217 [2024-10-01 06:19:15.666560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.666889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.666912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:50.217 [2024-10-01 06:19:15.666926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:26:50.217 [2024-10-01 06:19:15.666945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.672544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.672579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:50.217 [2024-10-01 06:19:15.672597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.578 ms 00:26:50.217 [2024-10-01 06:19:15.672609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.680558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.680594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:50.217 [2024-10-01 06:19:15.680604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.926 ms 00:26:50.217 [2024-10-01 06:19:15.680620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.683048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.683084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:50.217 [2024-10-01 06:19:15.683093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.361 ms 00:26:50.217 [2024-10-01 06:19:15.683101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.687623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.687660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:50.217 [2024-10-01 06:19:15.687670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.492 ms 00:26:50.217 [2024-10-01 06:19:15.687685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.691405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.691437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
P2L metadata 00:26:50.217 [2024-10-01 06:19:15.691447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:26:50.217 [2024-10-01 06:19:15.691461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.693678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.693710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:50.217 [2024-10-01 06:19:15.693719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:26:50.217 [2024-10-01 06:19:15.693727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-10-01 06:19:15.696002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-10-01 06:19:15.696032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:50.217 [2024-10-01 06:19:15.696041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:26:50.218 [2024-10-01 06:19:15.696048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-10-01 06:19:15.697625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-10-01 06:19:15.697656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:50.218 [2024-10-01 06:19:15.697665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:26:50.218 [2024-10-01 06:19:15.697672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-10-01 06:19:15.699418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-10-01 06:19:15.699448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:50.218 [2024-10-01 06:19:15.699457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.694 ms 00:26:50.218 [2024-10-01 06:19:15.699464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-10-01 06:19:15.699491] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:50.218 [2024-10-01 06:19:15.699514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:50.218 [2024-10-01 06:19:15.699529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:50.218 [2024-10-01 06:19:15.699538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
10: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699784] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699992] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.699999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:50.218 [2024-10-01 06:19:15.700148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 
06:19:15.700178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:50.219 [2024-10-01 06:19:15.700305] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:50.219 [2024-10-01 06:19:15.700314] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 329e6eb1-d2e9-4e44-aeda-5abbabe02bd5 00:26:50.219 [2024-10-01 06:19:15.700323] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:50.219 [2024-10-01 06:19:15.700330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:50.219 [2024-10-01 06:19:15.700337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:50.219 [2024-10-01 06:19:15.700344] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:50.219 [2024-10-01 06:19:15.700352] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:50.219 [2024-10-01 06:19:15.700361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:50.219 [2024-10-01 06:19:15.700372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:50.219 [2024-10-01 06:19:15.700379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:50.219 [2024-10-01 06:19:15.700385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:50.219 
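A quick gloss on the stats block above, under the usual definition of write amplification (media writes over host writes): the shutdown dump reports 960 total writes against 0 user writes, so

    WAF = total writes / user writes = 960 / 0 -> inf

which is why the `WAF: inf` line is expected here; presumably all 960 writes were internal metadata and housekeeping writes, with no host I/O counted.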
[2024-10-01 06:19:15.700392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-10-01 06:19:15.700400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:50.219 [2024-10-01 06:19:15.700417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:26:50.219 [2024-10-01 06:19:15.700426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.702265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-10-01 06:19:15.702295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:50.219 [2024-10-01 06:19:15.702312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:26:50.219 [2024-10-01 06:19:15.702320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.702421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-10-01 06:19:15.702431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:50.219 [2024-10-01 06:19:15.702439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:50.219 [2024-10-01 06:19:15.702446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.707956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.707987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:50.219 [2024-10-01 06:19:15.707997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.708004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.708059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.708069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:50.219 [2024-10-01 06:19:15.708077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.708085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.708121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.708131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:50.219 [2024-10-01 06:19:15.708139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.708146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.708162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.708173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:50.219 [2024-10-01 06:19:15.708181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.708189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.719635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.719693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:50.219 [2024-10-01 06:19:15.719709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.719717] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.728823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.728894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:50.219 [2024-10-01 06:19:15.728905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.728914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.728966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.728975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:50.219 [2024-10-01 06:19:15.728985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.728993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.729024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.729032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:50.219 [2024-10-01 06:19:15.729043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.729052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.729125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.729136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:50.219 [2024-10-01 06:19:15.729145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.729153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.729182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.729192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:50.219 [2024-10-01 06:19:15.729200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.729211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.729250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.729259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:50.219 [2024-10-01 06:19:15.729267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.729274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.729317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.219 [2024-10-01 06:19:15.729328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:50.219 [2024-10-01 06:19:15.729343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.219 [2024-10-01 06:19:15.729351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-10-01 06:19:15.729478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.757 ms, result 0 00:26:50.481 00:26:50.481 00:26:50.481 06:19:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:53.029 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88605 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 88605 ']' 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 88605 00:26:53.029 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (88605) - No such process 00:26:53.029 Process with pid 88605 is not found 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 88605 is not found' 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:53.029 Remove shared memory files 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:53.029 06:19:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:53.291 06:19:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:53.291 06:19:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:53.291 06:19:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:53.291 ************************************ 00:26:53.291 END TEST ftl_dirty_shutdown 00:26:53.291 ************************************ 00:26:53.291 00:26:53.291 real 5m12.924s 00:26:53.291 user 5m46.135s 00:26:53.291 sys 0m29.574s 00:26:53.291 06:19:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:53.291 06:19:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:53.291 06:19:18 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:53.291 06:19:18 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:53.291 06:19:18 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:53.291 06:19:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:53.291 ************************************ 00:26:53.291 START TEST ftl_upgrade_shutdown 00:26:53.291 ************************************ 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:53.291 * Looking for test storage... 
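The `killprocess 88605` trace just above is a liveness probe rather than a real signal: `kill -0 <pid>` delivers nothing and only tests whether the pid exists and is signalable, and since the target process is already gone by this point (expected after a dirty-shutdown sequence), the helper falls through to the not-found message. A minimal sketch of that pattern (a simplified stand-in, not the full autotest_common.sh helper):

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then   # signal 0: existence check only
            kill "$pid"                       # the real helper also waits for shutdown
        else
            echo "Process with pid $pid is not found"
        fi
    }
    killprocess 88605   # pid already gone here, so only the message prints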
00:26:53.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.291 --rc genhtml_branch_coverage=1 00:26:53.291 --rc genhtml_function_coverage=1 00:26:53.291 --rc genhtml_legend=1 00:26:53.291 --rc geninfo_all_blocks=1 00:26:53.291 --rc geninfo_unexecuted_blocks=1 00:26:53.291 00:26:53.291 ' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.291 --rc genhtml_branch_coverage=1 00:26:53.291 --rc genhtml_function_coverage=1 00:26:53.291 --rc genhtml_legend=1 00:26:53.291 --rc geninfo_all_blocks=1 00:26:53.291 --rc geninfo_unexecuted_blocks=1 00:26:53.291 00:26:53.291 ' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.291 --rc genhtml_branch_coverage=1 00:26:53.291 --rc genhtml_function_coverage=1 00:26:53.291 --rc genhtml_legend=1 00:26:53.291 --rc geninfo_all_blocks=1 00:26:53.291 --rc geninfo_unexecuted_blocks=1 00:26:53.291 00:26:53.291 ' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.291 --rc genhtml_branch_coverage=1 00:26:53.291 --rc genhtml_function_coverage=1 00:26:53.291 --rc genhtml_legend=1 00:26:53.291 --rc geninfo_all_blocks=1 00:26:53.291 --rc geninfo_unexecuted_blocks=1 00:26:53.291 00:26:53.291 ' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:53.291 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:53.292 06:19:18 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91936 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91936 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91936 ']' 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:53.292 06:19:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:53.558 [2024-10-01 06:19:18.942953] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
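Backing up to the lcov gate traced near the top of this test: `lt 1.15 2` runs `cmp_versions` from scripts/common.sh, which splits both version strings on `.`, `-` and `:` and compares them component by component as integers. A condensed sketch of that walk (hypothetical `version_lt` helper; the real function also validates each component via `decimal` and supports the other comparison operators):

    version_lt() {
        local IFS=.-:                       # split on dots, dashes and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do   # missing components count as 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                            # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2"    # decides on the first component, as traced: 1 < 2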
00:26:53.558 [2024-10-01 06:19:18.943205] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91936 ] 00:26:53.558 [2024-10-01 06:19:19.074936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.558 [2024-10-01 06:19:19.120395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:54.502 06:19:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:54.502 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:54.502 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:54.502 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:54.502 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:54.502 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:54.502 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:54.502 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:26:54.502 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:54.764 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:54.764 { 00:26:54.764 "name": "basen1", 00:26:54.764 "aliases": [ 00:26:54.764 "bb0c89ca-9022-4fe1-8423-3bd022647894" 00:26:54.764 ], 00:26:54.764 "product_name": "NVMe disk", 00:26:54.764 "block_size": 4096, 00:26:54.764 "num_blocks": 1310720, 00:26:54.764 "uuid": "bb0c89ca-9022-4fe1-8423-3bd022647894", 00:26:54.764 "numa_id": -1, 00:26:54.764 "assigned_rate_limits": { 00:26:54.764 "rw_ios_per_sec": 0, 00:26:54.764 "rw_mbytes_per_sec": 0, 00:26:54.764 "r_mbytes_per_sec": 0, 00:26:54.764 "w_mbytes_per_sec": 0 00:26:54.764 }, 00:26:54.764 "claimed": true, 00:26:54.764 "claim_type": "read_many_write_one", 00:26:54.764 "zoned": false, 00:26:54.764 "supported_io_types": { 00:26:54.764 "read": true, 00:26:54.764 "write": true, 00:26:54.764 "unmap": true, 00:26:54.764 "flush": true, 00:26:54.764 "reset": true, 00:26:54.764 "nvme_admin": true, 00:26:54.764 "nvme_io": true, 00:26:54.764 "nvme_io_md": false, 00:26:54.764 "write_zeroes": true, 00:26:54.764 "zcopy": false, 00:26:54.764 "get_zone_info": false, 00:26:54.764 "zone_management": false, 00:26:54.764 "zone_append": false, 00:26:54.764 "compare": true, 00:26:54.764 "compare_and_write": false, 00:26:54.764 "abort": true, 00:26:54.764 "seek_hole": false, 00:26:54.764 "seek_data": false, 00:26:54.764 "copy": true, 00:26:54.764 "nvme_iov_md": false 00:26:54.764 }, 00:26:54.764 "driver_specific": { 00:26:54.764 "nvme": [ 00:26:54.764 { 00:26:54.764 "pci_address": "0000:00:11.0", 00:26:54.764 "trid": { 00:26:54.764 "trtype": "PCIe", 00:26:54.764 "traddr": "0000:00:11.0" 00:26:54.764 }, 00:26:54.764 "ctrlr_data": { 00:26:54.764 "cntlid": 0, 00:26:54.764 "vendor_id": "0x1b36", 00:26:54.764 "model_number": "QEMU NVMe Ctrl", 00:26:54.764 "serial_number": "12341", 00:26:54.764 "firmware_revision": "8.0.0", 00:26:54.764 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:54.764 "oacs": { 00:26:54.764 "security": 0, 00:26:54.764 "format": 1, 00:26:54.764 "firmware": 0, 00:26:54.764 "ns_manage": 1 00:26:54.764 }, 00:26:54.764 "multi_ctrlr": false, 00:26:54.764 "ana_reporting": false 00:26:54.764 }, 00:26:54.764 "vs": { 00:26:54.764 "nvme_version": "1.4" 00:26:54.764 }, 00:26:54.764 "ns_data": { 00:26:54.764 "id": 1, 00:26:54.764 "can_share": false 00:26:54.764 } 00:26:54.764 } 00:26:54.764 ], 00:26:54.764 "mp_policy": "active_passive" 00:26:54.764 } 00:26:54.764 } 00:26:54.764 ]' 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:54.765 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:55.026 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=71ca2324-d035-4d48-a1fd-604e5a575603 00:26:55.026 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:55.026 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71ca2324-d035-4d48-a1fd-604e5a575603 00:26:55.288 06:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:55.549 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6d19a4a8-7865-4838-b6d5-4164bf11e603 00:26:55.549 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6d19a4a8-7865-4838-b6d5-4164bf11e603 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=78453f32-f87d-4b1f-9d90-344500e89ed2 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 78453f32-f87d-4b1f-9d90-344500e89ed2 ]] 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 78453f32-f87d-4b1f-9d90-344500e89ed2 5120 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=78453f32-f87d-4b1f-9d90-344500e89ed2 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 78453f32-f87d-4b1f-9d90-344500e89ed2 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=78453f32-f87d-4b1f-9d90-344500e89ed2 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78453f32-f87d-4b1f-9d90-344500e89ed2 00:26:55.810 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:55.810 { 00:26:55.810 "name": "78453f32-f87d-4b1f-9d90-344500e89ed2", 00:26:55.810 "aliases": [ 00:26:55.810 "lvs/basen1p0" 00:26:55.810 ], 00:26:55.810 "product_name": "Logical Volume", 00:26:55.810 "block_size": 4096, 00:26:55.810 "num_blocks": 5242880, 00:26:55.810 "uuid": "78453f32-f87d-4b1f-9d90-344500e89ed2", 00:26:55.810 "assigned_rate_limits": { 00:26:55.810 "rw_ios_per_sec": 0, 00:26:55.810 "rw_mbytes_per_sec": 0, 00:26:55.810 "r_mbytes_per_sec": 0, 00:26:55.810 "w_mbytes_per_sec": 0 00:26:55.810 }, 00:26:55.810 "claimed": false, 00:26:55.810 "zoned": false, 00:26:55.810 "supported_io_types": { 00:26:55.810 "read": true, 00:26:55.810 "write": true, 00:26:55.810 "unmap": true, 00:26:55.810 "flush": false, 00:26:55.810 "reset": true, 00:26:55.810 "nvme_admin": false, 00:26:55.810 "nvme_io": false, 00:26:55.810 "nvme_io_md": false, 00:26:55.810 "write_zeroes": 
true, 00:26:55.810 "zcopy": false, 00:26:55.810 "get_zone_info": false, 00:26:55.810 "zone_management": false, 00:26:55.810 "zone_append": false, 00:26:55.810 "compare": false, 00:26:55.810 "compare_and_write": false, 00:26:55.810 "abort": false, 00:26:55.810 "seek_hole": true, 00:26:55.810 "seek_data": true, 00:26:55.810 "copy": false, 00:26:55.810 "nvme_iov_md": false 00:26:55.810 }, 00:26:55.810 "driver_specific": { 00:26:55.810 "lvol": { 00:26:55.810 "lvol_store_uuid": "6d19a4a8-7865-4838-b6d5-4164bf11e603", 00:26:55.810 "base_bdev": "basen1", 00:26:55.810 "thin_provision": true, 00:26:55.810 "num_allocated_clusters": 0, 00:26:55.810 "snapshot": false, 00:26:55.810 "clone": false, 00:26:55.810 "esnap_clone": false 00:26:55.810 } 00:26:55.810 } 00:26:55.810 } 00:26:55.810 ]' 00:26:56.071 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:56.072 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:56.072 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:56.072 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:56.072 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:56.072 06:19:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:56.072 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:56.072 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:56.072 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:56.333 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:56.333 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:56.333 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:56.699 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:56.700 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:56.700 06:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 78453f32-f87d-4b1f-9d90-344500e89ed2 -c cachen1p0 --l2p_dram_limit 2 00:26:56.700 [2024-10-01 06:19:22.101476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.101548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:56.700 [2024-10-01 06:19:22.101565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:56.700 [2024-10-01 06:19:22.101576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.101641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.101653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:56.700 [2024-10-01 06:19:22.101662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:26:56.700 [2024-10-01 06:19:22.101674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.101696] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:56.700 [2024-10-01 
06:19:22.102593] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:56.700 [2024-10-01 06:19:22.102640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.102653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:56.700 [2024-10-01 06:19:22.102666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.949 ms 00:26:56.700 [2024-10-01 06:19:22.102676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.102726] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID db08d075-0ca7-4eb3-884c-ec3dac13a152 00:26:56.700 [2024-10-01 06:19:22.104130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.104162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:56.700 [2024-10-01 06:19:22.104174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:56.700 [2024-10-01 06:19:22.104184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.111313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.111348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:56.700 [2024-10-01 06:19:22.111360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.040 ms 00:26:56.700 [2024-10-01 06:19:22.111368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.111415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.111425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:56.700 [2024-10-01 06:19:22.111436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:56.700 [2024-10-01 06:19:22.111450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.111496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.111506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:56.700 [2024-10-01 06:19:22.111517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:56.700 [2024-10-01 06:19:22.111526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.111550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:56.700 [2024-10-01 06:19:22.113329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.113361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:56.700 [2024-10-01 06:19:22.113374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.788 ms 00:26:56.700 [2024-10-01 06:19:22.113384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.113414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.113425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:56.700 [2024-10-01 06:19:22.113434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:56.700 [2024-10-01 06:19:22.113445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.113470] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:56.700 [2024-10-01 06:19:22.113618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:56.700 [2024-10-01 06:19:22.113636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:56.700 [2024-10-01 06:19:22.113649] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:56.700 [2024-10-01 06:19:22.113660] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:56.700 [2024-10-01 06:19:22.113671] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:56.700 [2024-10-01 06:19:22.113680] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:56.700 [2024-10-01 06:19:22.113696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:56.700 [2024-10-01 06:19:22.113705] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:56.700 [2024-10-01 06:19:22.113714] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:56.700 [2024-10-01 06:19:22.113725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.113736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:56.700 [2024-10-01 06:19:22.113747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:26:56.700 [2024-10-01 06:19:22.113756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.113865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.700 [2024-10-01 06:19:22.113885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:56.700 [2024-10-01 06:19:22.113893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:26:56.700 [2024-10-01 06:19:22.113903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.700 [2024-10-01 06:19:22.113997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:56.700 [2024-10-01 06:19:22.114019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:56.700 [2024-10-01 06:19:22.114028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:56.700 [2024-10-01 06:19:22.114040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:56.700 [2024-10-01 06:19:22.114060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:56.700 [2024-10-01 06:19:22.114078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:56.700 [2024-10-01 06:19:22.114087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:56.700 [2024-10-01 06:19:22.114098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:56.700 [2024-10-01 06:19:22.114117] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:56.700 [2024-10-01 06:19:22.114124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:56.700 [2024-10-01 06:19:22.114144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:56.700 [2024-10-01 06:19:22.114153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:56.700 [2024-10-01 06:19:22.114171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:56.700 [2024-10-01 06:19:22.114178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:56.700 [2024-10-01 06:19:22.114197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:56.700 [2024-10-01 06:19:22.114206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:56.700 [2024-10-01 06:19:22.114214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:56.700 [2024-10-01 06:19:22.114223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:56.700 [2024-10-01 06:19:22.114231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:56.700 [2024-10-01 06:19:22.114241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:56.700 [2024-10-01 06:19:22.114249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:56.700 [2024-10-01 06:19:22.114258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:56.700 [2024-10-01 06:19:22.114267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:56.700 [2024-10-01 06:19:22.114277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:56.700 [2024-10-01 06:19:22.114285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:56.700 [2024-10-01 06:19:22.114295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:56.700 [2024-10-01 06:19:22.114302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:56.700 [2024-10-01 06:19:22.114312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:56.700 [2024-10-01 06:19:22.114332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:56.700 [2024-10-01 06:19:22.114340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:56.700 [2024-10-01 06:19:22.114358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:56.700 [2024-10-01 06:19:22.114386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:56.700 [2024-10-01 06:19:22.114393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.700 [2024-10-01 06:19:22.114403] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:56.700 [2024-10-01 06:19:22.114412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:56.701 [2024-10-01 06:19:22.114424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:56.701 [2024-10-01 06:19:22.114433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:56.701 [2024-10-01 06:19:22.114444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:56.701 [2024-10-01 06:19:22.114459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:56.701 [2024-10-01 06:19:22.114472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:56.701 [2024-10-01 06:19:22.114480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:56.701 [2024-10-01 06:19:22.114489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:56.701 [2024-10-01 06:19:22.114498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:56.701 [2024-10-01 06:19:22.114511] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:56.701 [2024-10-01 06:19:22.114522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:56.701 [2024-10-01 06:19:22.114542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:56.701 [2024-10-01 06:19:22.114570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:56.701 [2024-10-01 06:19:22.114578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:56.701 [2024-10-01 06:19:22.114588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:56.701 [2024-10-01 06:19:22.114596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:56.701 [2024-10-01 06:19:22.114653] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:56.701 [2024-10-01 06:19:22.114665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:56.701 [2024-10-01 06:19:22.114686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:56.701 [2024-10-01 06:19:22.114695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:56.701 [2024-10-01 06:19:22.114702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:56.701 [2024-10-01 06:19:22.114712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.701 [2024-10-01 06:19:22.114726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:56.701 [2024-10-01 06:19:22.114738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.780 ms 00:26:56.701 [2024-10-01 06:19:22.114745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.701 [2024-10-01 06:19:22.114794] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
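The superblock region table above can be cross-checked against the MiB figures in the layout dump, since the blk_sz counts are in 4096-byte blocks (the block size reported for basen1 earlier). Two spot checks, assuming that unit:

    echo $(( 0x480000 * 4096 / 1048576 ))   # 18432 -> "data_btm ... blocks: 18432.00 MiB" (region type:0x9)
    echo $(( 0xe80 * 4 ))                   # 14848 KiB = 14.50 MiB -> the l2p region (type:0x2)

The second also squares with the reported 3774873 L2P entries at 4 bytes each: about 14.4 MiB of mappings, rounded up to the 14.50 MiB region.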
00:26:56.701 [2024-10-01 06:19:22.114805] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:00.005 [2024-10-01 06:19:25.264013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.264072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:00.005 [2024-10-01 06:19:25.264093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3149.204 ms 00:27:00.005 [2024-10-01 06:19:25.264102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.274585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.274631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:00.005 [2024-10-01 06:19:25.274648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.355 ms 00:27:00.005 [2024-10-01 06:19:25.274660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.274718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.274734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:00.005 [2024-10-01 06:19:25.274749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:00.005 [2024-10-01 06:19:25.274757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.284620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.284661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:00.005 [2024-10-01 06:19:25.284675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.803 ms 00:27:00.005 [2024-10-01 06:19:25.284684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.284727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.284736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:00.005 [2024-10-01 06:19:25.284750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:00.005 [2024-10-01 06:19:25.284759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.285238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.285262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:00.005 [2024-10-01 06:19:25.285275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.415 ms 00:27:00.005 [2024-10-01 06:19:25.285285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.285336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.285346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:00.005 [2024-10-01 06:19:25.285357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:00.005 [2024-10-01 06:19:25.285374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.301806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.301876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:00.005 [2024-10-01 06:19:25.301899] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.399 ms 00:27:00.005 [2024-10-01 06:19:25.301913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.313041] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:00.005 [2024-10-01 06:19:25.314051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.314082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:00.005 [2024-10-01 06:19:25.314092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.991 ms 00:27:00.005 [2024-10-01 06:19:25.314102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.328033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.328072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:00.005 [2024-10-01 06:19:25.328084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.904 ms 00:27:00.005 [2024-10-01 06:19:25.328097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.328185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.328199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:00.005 [2024-10-01 06:19:25.328208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:27:00.005 [2024-10-01 06:19:25.328217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.331050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.331086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:00.005 [2024-10-01 06:19:25.331098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.813 ms 00:27:00.005 [2024-10-01 06:19:25.331110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.333636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.333670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:00.005 [2024-10-01 06:19:25.333680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.488 ms 00:27:00.005 [2024-10-01 06:19:25.333690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.334010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.334040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:00.005 [2024-10-01 06:19:25.334053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:27:00.005 [2024-10-01 06:19:25.334069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.364230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.364274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:00.005 [2024-10-01 06:19:25.364291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.140 ms 00:27:00.005 [2024-10-01 06:19:25.364302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.369029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
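Every management step in this startup sequence logs the same four-record block: Action, name, duration, status. When eyeballing a run like this, a throwaway one-liner (log file name hypothetical) pulls those records into a per-step duration table, slowest first:

  grep -E 'trace_step.*(name|duration):' ftl.log \
    | awk -F' (name|duration): ' '/ name: / {step=$2; next} {print $2 "\t" step}' \
    | sort -rn   # e.g. "3149.204 ms   Scrub NV cache" tops this run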
00:27:00.005 [2024-10-01 06:19:25.369066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:00.005 [2024-10-01 06:19:25.369078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.690 ms 00:27:00.005 [2024-10-01 06:19:25.369089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.373073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.373115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:00.005 [2024-10-01 06:19:25.373126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.946 ms 00:27:00.005 [2024-10-01 06:19:25.373136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.377545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.377580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:00.005 [2024-10-01 06:19:25.377590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.388 ms 00:27:00.005 [2024-10-01 06:19:25.377601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.377631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.005 [2024-10-01 06:19:25.377643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:00.005 [2024-10-01 06:19:25.377652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:00.005 [2024-10-01 06:19:25.377662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.005 [2024-10-01 06:19:25.377728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.006 [2024-10-01 06:19:25.377739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:00.006 [2024-10-01 06:19:25.377749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:00.006 [2024-10-01 06:19:25.377758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.006 [2024-10-01 06:19:25.378701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3276.796 ms, result 0 00:27:00.006 { 00:27:00.006 "name": "ftl", 00:27:00.006 "uuid": "db08d075-0ca7-4eb3-884c-ec3dac13a152" 00:27:00.006 } 00:27:00.006 06:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:00.006 [2024-10-01 06:19:25.587359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.006 06:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:00.267 06:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:00.527 [2024-10-01 06:19:25.999774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:00.527 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:00.805 [2024-10-01 06:19:26.204122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:00.805 06:19:26 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:01.065 Fill FTL, iteration 1 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:01.065 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92058 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92058 /var/tmp/spdk.tgt.sock 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92058 ']' 00:27:01.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:01.066 06:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:01.066 [2024-10-01 06:19:26.620317] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
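tcp_initiator_setup above spins up a second SPDK application beside the main target: it gets its own RPC socket (/var/tmp/spdk.tgt.sock) and its own core ([1]) so the two reactors never contend. A minimal sketch of that pattern, assuming the repo layout shown in the log and using rpc_get_methods as the readiness probe (the actual waitforlisten helper may poll differently):

  # the main target already owns /var/tmp/spdk.sock; give the initiator its own socket and core
  ./build/bin/spdk_tgt --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  # poll the private socket until the app answers RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.tgt.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
  done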
00:27:01.066 [2024-10-01 06:19:26.620435] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92058 ] 00:27:01.326 [2024-10-01 06:19:26.754649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.326 [2024-10-01 06:19:26.797079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.898 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:01.898 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:01.898 06:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:02.160 ftln1 00:27:02.160 06:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:02.160 06:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92058 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92058 ']' 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92058 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92058 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:02.421 killing process with pid 92058 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92058' 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92058 00:27:02.421 06:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92058 00:27:02.991 06:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:02.991 06:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:02.991 [2024-10-01 06:19:28.393365] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
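The echo '{"subsystems": [' ... ']}' bracketing above is how the test captures a self-contained config: save_subsystem_config -n bdev dumps just the bdev subsystem of the live initiator (including the freshly attached ftln1 controller), the two echoes wrap it into a complete config document, and the temporary app is then killed. spdk_dd later replays that file via --json, so no RPC server is needed while the I/O runs. Schematically (the redirect target is presumed; the log only shows the three commands):

  {
    echo '{"subsystems": ['
    ./scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'
  } > test/ftl/config/ini.json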
00:27:02.991 [2024-10-01 06:19:28.393501] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92089 ] 00:27:02.991 [2024-10-01 06:19:28.529243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.991 [2024-10-01 06:19:28.579952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.432  Copying: 164/1024 [MB] (164 MBps) Copying: 373/1024 [MB] (209 MBps) Copying: 593/1024 [MB] (220 MBps) Copying: 812/1024 [MB] (219 MBps) Copying: 1024/1024 [MB] (average 205 MBps) 00:27:08.432 00:27:08.432 Calculate MD5 checksum, iteration 1 00:27:08.432 06:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:08.432 06:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:08.432 06:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:08.432 06:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:08.432 06:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:08.432 06:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:08.432 06:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:08.432 06:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:08.689 [2024-10-01 06:19:34.070038] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
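Each iteration is a dd-style round trip against the exported FTL namespace: --if/--ob with --seek writes 1024 one-MiB blocks of random data into ftln1 at queue depth 2, and --ib/--of with --skip reads the same 1 GiB window back into a plain file for checksumming. Stripped of the test plumbing, the pair looks like this (paths shortened; flags exactly as driven above):

  # fill: random data -> ftln1; --seek counts output blocks of --bs bytes
  spdk_dd --json=ini.json --if=/dev/urandom --ob=ftln1 \
          --bs=1048576 --count=1024 --qd=2 --seek=0
  # read back: ftln1 -> file; --skip selects the same window on the input side
  spdk_dd --json=ini.json --ib=ftln1 --of=test/ftl/file \
          --bs=1048576 --count=1024 --qd=2 --skip=0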
00:27:08.689 [2024-10-01 06:19:34.070157] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92147 ] 00:27:08.689 [2024-10-01 06:19:34.206564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.689 [2024-10-01 06:19:34.250752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.628  Copying: 643/1024 [MB] (643 MBps) Copying: 1024/1024 [MB] (average 653 MBps) 00:27:10.628 00:27:10.628 06:19:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:10.628 06:19:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:13.155 Fill FTL, iteration 2 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4ae44bb90f685fe8912f95b86e77c4dd 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:13.155 06:19:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:13.155 [2024-10-01 06:19:38.472617] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
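Between iterations the bookkeeping above advances everything by one window: the md5 of the read-back file is stored in sums[], and seek/skip move forward by count blocks so iteration 2 covers the second gigabyte of ftln1. Roughly, in the script's own idiom ($testfile standing in for test/ftl/file; the real script updates seek and skip at slightly different points):

  sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')   # e.g. 4ae44bb90f685fe8912f95b86e77c4dd above
  seek=$((seek + count))   # 0 -> 1024: next fill writes blocks 1024..2047
  skip=$((skip + count))   # next read-back follows the same window
  (( i++ ))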
00:27:13.155 [2024-10-01 06:19:38.472764] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92198 ] 00:27:13.155 [2024-10-01 06:19:38.606866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.155 [2024-10-01 06:19:38.641065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.261  Copying: 204/1024 [MB] (204 MBps) Copying: 403/1024 [MB] (199 MBps) Copying: 604/1024 [MB] (201 MBps) Copying: 818/1024 [MB] (214 MBps) Copying: 1024/1024 [MB] (average 211 MBps) 00:27:18.261 00:27:18.519 Calculate MD5 checksum, iteration 2 00:27:18.519 06:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:18.519 06:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:18.519 06:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:18.519 06:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:18.519 06:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:18.519 06:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:18.519 06:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:18.519 06:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:18.519 [2024-10-01 06:19:43.951589] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
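After two passes, sums[0] and sums[1] describe the first 2 GiB of ftln1 as written before any shutdown. The point of recording them is presumably the later comparison: once the target restarts from the prep_upgrade_on_shutdown state, the same windows can be re-read and re-hashed against the stored values. A hypothetical sketch of that closing check, reusing the test's own tcp_dd helper (not shown at this point in the log):

  for ((i = 0; i < iterations; i++)); do
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
    [[ $(md5sum "$testfile" | cut -f1 -d' ') == "${sums[i]}" ]] || return 1
  done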
00:27:18.519 [2024-10-01 06:19:43.951713] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92259 ] 00:27:18.519 [2024-10-01 06:19:44.087522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.519 [2024-10-01 06:19:44.128287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.392  Copying: 641/1024 [MB] (641 MBps) Copying: 1024/1024 [MB] (average 659 MBps) 00:27:21.392 00:27:21.392 06:19:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:21.392 06:19:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:23.291 06:19:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:23.291 06:19:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=bf415f388566540a75084d8d4e963933 00:27:23.291 06:19:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:23.291 06:19:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:23.291 06:19:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:23.615 [2024-10-01 06:19:49.084596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.615 [2024-10-01 06:19:49.084657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:23.615 [2024-10-01 06:19:49.084671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:23.615 [2024-10-01 06:19:49.084679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.615 [2024-10-01 06:19:49.084699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.615 [2024-10-01 06:19:49.084707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:23.615 [2024-10-01 06:19:49.084717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:23.615 [2024-10-01 06:19:49.084724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.615 [2024-10-01 06:19:49.084741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.615 [2024-10-01 06:19:49.084748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:23.615 [2024-10-01 06:19:49.084755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:23.615 [2024-10-01 06:19:49.084761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.615 [2024-10-01 06:19:49.084815] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.218 ms, result 0 00:27:23.615 true 00:27:23.615 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:23.887 { 00:27:23.887 "name": "ftl", 00:27:23.887 "properties": [ 00:27:23.887 { 00:27:23.887 "name": "superblock_version", 00:27:23.887 "value": 5, 00:27:23.887 "read-only": true 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "name": "base_device", 00:27:23.887 "bands": [ 00:27:23.887 { 00:27:23.887 "id": 0, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 1, 
00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 2, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 3, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 4, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 5, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 6, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 7, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 8, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 9, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 10, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 11, 00:27:23.887 "state": "FREE", 00:27:23.887 "validity": 0.0 00:27:23.887 }, 00:27:23.887 { 00:27:23.887 "id": 12, 00:27:23.887 "state": "FREE", 00:27:23.888 "validity": 0.0 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 13, 00:27:23.888 "state": "FREE", 00:27:23.888 "validity": 0.0 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 14, 00:27:23.888 "state": "FREE", 00:27:23.888 "validity": 0.0 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 15, 00:27:23.888 "state": "FREE", 00:27:23.888 "validity": 0.0 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 16, 00:27:23.888 "state": "FREE", 00:27:23.888 "validity": 0.0 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 17, 00:27:23.888 "state": "FREE", 00:27:23.888 "validity": 0.0 00:27:23.888 } 00:27:23.888 ], 00:27:23.888 "read-only": true 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "name": "cache_device", 00:27:23.888 "type": "bdev", 00:27:23.888 "chunks": [ 00:27:23.888 { 00:27:23.888 "id": 0, 00:27:23.888 "state": "INACTIVE", 00:27:23.888 "utilization": 0.0 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 1, 00:27:23.888 "state": "CLOSED", 00:27:23.888 "utilization": 1.0 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 2, 00:27:23.888 "state": "CLOSED", 00:27:23.888 "utilization": 1.0 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 3, 00:27:23.888 "state": "OPEN", 00:27:23.888 "utilization": 0.001953125 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "id": 4, 00:27:23.888 "state": "OPEN", 00:27:23.888 "utilization": 0.0 00:27:23.888 } 00:27:23.888 ], 00:27:23.888 "read-only": true 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "name": "verbose_mode", 00:27:23.888 "value": true, 00:27:23.888 "unit": "", 00:27:23.888 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:23.888 }, 00:27:23.888 { 00:27:23.888 "name": "prep_upgrade_on_shutdown", 00:27:23.888 "value": false, 00:27:23.888 "unit": "", 00:27:23.888 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:23.888 } 00:27:23.888 ] 00:27:23.888 } 00:27:23.888 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:23.888 [2024-10-01 06:19:49.452976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.888 [2024-10-01 06:19:49.453044] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:23.888 [2024-10-01 06:19:49.453057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:23.888 [2024-10-01 06:19:49.453064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.888 [2024-10-01 06:19:49.453086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.888 [2024-10-01 06:19:49.453109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:23.888 [2024-10-01 06:19:49.453116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:23.888 [2024-10-01 06:19:49.453123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.888 [2024-10-01 06:19:49.453140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.888 [2024-10-01 06:19:49.453147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:23.888 [2024-10-01 06:19:49.453154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:23.888 [2024-10-01 06:19:49.453160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.888 [2024-10-01 06:19:49.453214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.231 ms, result 0 00:27:23.888 true 00:27:23.888 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:23.888 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:23.888 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:24.148 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:24.148 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:24.148 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:24.405 [2024-10-01 06:19:49.837311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.405 [2024-10-01 06:19:49.837369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:24.405 [2024-10-01 06:19:49.837381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:24.405 [2024-10-01 06:19:49.837388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.405 [2024-10-01 06:19:49.837409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.405 [2024-10-01 06:19:49.837415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:24.405 [2024-10-01 06:19:49.837422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:24.405 [2024-10-01 06:19:49.837428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.405 [2024-10-01 06:19:49.837444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.405 [2024-10-01 06:19:49.837451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:24.405 [2024-10-01 06:19:49.837458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:24.405 [2024-10-01 06:19:49.837464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.405 [2024-10-01 06:19:49.837515] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.202 ms, result 0 00:27:24.405 true 00:27:24.405 06:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:24.663 { 00:27:24.663 "name": "ftl", 00:27:24.663 "properties": [ 00:27:24.663 { 00:27:24.663 "name": "superblock_version", 00:27:24.663 "value": 5, 00:27:24.663 "read-only": true 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "name": "base_device", 00:27:24.663 "bands": [ 00:27:24.663 { 00:27:24.663 "id": 0, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 1, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 2, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 3, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 4, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 5, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 6, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 7, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 8, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 9, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 10, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 11, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 12, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 13, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 14, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 15, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 16, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 17, 00:27:24.663 "state": "FREE", 00:27:24.663 "validity": 0.0 00:27:24.663 } 00:27:24.663 ], 00:27:24.663 "read-only": true 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "name": "cache_device", 00:27:24.663 "type": "bdev", 00:27:24.663 "chunks": [ 00:27:24.663 { 00:27:24.663 "id": 0, 00:27:24.663 "state": "INACTIVE", 00:27:24.663 "utilization": 0.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 1, 00:27:24.663 "state": "CLOSED", 00:27:24.663 "utilization": 1.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 2, 00:27:24.663 "state": "CLOSED", 00:27:24.663 "utilization": 1.0 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 3, 00:27:24.663 "state": "OPEN", 00:27:24.663 "utilization": 0.001953125 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "id": 4, 00:27:24.663 "state": "OPEN", 00:27:24.663 "utilization": 0.0 00:27:24.663 } 00:27:24.663 ], 00:27:24.663 "read-only": true 00:27:24.663 }, 00:27:24.663 { 00:27:24.663 "name": "verbose_mode", 00:27:24.663 "value": true, 00:27:24.663 "unit": "", 00:27:24.664 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:24.664 }, 00:27:24.664 { 00:27:24.664 "name": "prep_upgrade_on_shutdown", 00:27:24.664 "value": true, 00:27:24.664 "unit": "", 00:27:24.664 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:24.664 } 00:27:24.664 ] 00:27:24.664 } 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91936 ]] 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91936 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91936 ']' 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91936 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91936 00:27:24.664 killing process with pid 91936 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91936' 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91936 00:27:24.664 06:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91936 00:27:24.664 [2024-10-01 06:19:50.186176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:24.664 [2024-10-01 06:19:50.190253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.664 [2024-10-01 06:19:50.190291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:24.664 [2024-10-01 06:19:50.190304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:24.664 [2024-10-01 06:19:50.190311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.664 [2024-10-01 06:19:50.190330] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:24.664 [2024-10-01 06:19:50.190856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.664 [2024-10-01 06:19:50.190882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:24.664 [2024-10-01 06:19:50.190891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:27:24.664 [2024-10-01 06:19:50.190897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.770 [2024-10-01 06:19:58.172037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.770 [2024-10-01 06:19:58.172109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:32.770 [2024-10-01 06:19:58.172125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7981.081 ms 00:27:32.770 [2024-10-01 06:19:58.172138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.770 [2024-10-01 06:19:58.173292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.770 [2024-10-01 06:19:58.173316] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:32.770 [2024-10-01 06:19:58.173326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.136 ms 00:27:32.770 [2024-10-01 06:19:58.173334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.770 [2024-10-01 06:19:58.174474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.770 [2024-10-01 06:19:58.174507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:32.770 [2024-10-01 06:19:58.174516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:27:32.770 [2024-10-01 06:19:58.174528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.770 [2024-10-01 06:19:58.175949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.770 [2024-10-01 06:19:58.175983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:32.770 [2024-10-01 06:19:58.175993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.370 ms 00:27:32.770 [2024-10-01 06:19:58.176001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.770 [2024-10-01 06:19:58.178430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 [2024-10-01 06:19:58.178464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:32.771 [2024-10-01 06:19:58.178474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.397 ms 00:27:32.771 [2024-10-01 06:19:58.178483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.178569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 [2024-10-01 06:19:58.178579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:32.771 [2024-10-01 06:19:58.178593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:32.771 [2024-10-01 06:19:58.178601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.179751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 [2024-10-01 06:19:58.179784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:32.771 [2024-10-01 06:19:58.179793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:27:32.771 [2024-10-01 06:19:58.179799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.180731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 [2024-10-01 06:19:58.180763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:32.771 [2024-10-01 06:19:58.180772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.903 ms 00:27:32.771 [2024-10-01 06:19:58.180779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.181889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 [2024-10-01 06:19:58.181920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:32.771 [2024-10-01 06:19:58.181929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.083 ms 00:27:32.771 [2024-10-01 06:19:58.181936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.182821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 
[2024-10-01 06:19:58.182864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:32.771 [2024-10-01 06:19:58.182873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.828 ms 00:27:32.771 [2024-10-01 06:19:58.182880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.182910] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:32.771 [2024-10-01 06:19:58.182924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:32.771 [2024-10-01 06:19:58.182934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:32.771 [2024-10-01 06:19:58.182942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:32.771 [2024-10-01 06:19:58.182950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.182959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.182968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.182975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.182982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.182989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.182997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:32.771 [2024-10-01 06:19:58.183066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:32.771 [2024-10-01 06:19:58.183073] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: db08d075-0ca7-4eb3-884c-ec3dac13a152 00:27:32.771 [2024-10-01 06:19:58.183081] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:32.771 [2024-10-01 06:19:58.183088] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:32.771 [2024-10-01 06:19:58.183095] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:32.771 [2024-10-01 06:19:58.183103] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:32.771 [2024-10-01 06:19:58.183110] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:32.771 [2024-10-01 06:19:58.183127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:32.771 [2024-10-01 06:19:58.183135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:32.771 [2024-10-01 06:19:58.183141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:32.771 [2024-10-01 06:19:58.183148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:32.771 [2024-10-01 06:19:58.183155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 [2024-10-01 06:19:58.183163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:32.771 [2024-10-01 06:19:58.183171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.246 ms 00:27:32.771 [2024-10-01 06:19:58.183179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.184971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 [2024-10-01 06:19:58.184995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:32.771 [2024-10-01 06:19:58.185005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.777 ms 00:27:32.771 [2024-10-01 06:19:58.185019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.185122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.771 [2024-10-01 06:19:58.185166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:32.771 [2024-10-01 06:19:58.185176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:27:32.771 [2024-10-01 06:19:58.185185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.191642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.191679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:32.771 [2024-10-01 06:19:58.191701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.191708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.191738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.191746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:32.771 [2024-10-01 06:19:58.191755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.191770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.191840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.191869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:32.771 [2024-10-01 06:19:58.191877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.191888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.191907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 
06:19:58.191915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:32.771 [2024-10-01 06:19:58.191923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.191931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.203468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.203510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:32.771 [2024-10-01 06:19:58.203521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.203534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.212603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.212648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:32.771 [2024-10-01 06:19:58.212660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.212676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.212749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.212759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:32.771 [2024-10-01 06:19:58.212767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.212775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.212815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.212824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:32.771 [2024-10-01 06:19:58.212833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.212840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.212921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.212932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:32.771 [2024-10-01 06:19:58.212940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.771 [2024-10-01 06:19:58.212948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.771 [2024-10-01 06:19:58.212981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.771 [2024-10-01 06:19:58.212993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:32.772 [2024-10-01 06:19:58.213002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.772 [2024-10-01 06:19:58.213010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.772 [2024-10-01 06:19:58.213052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:32.772 [2024-10-01 06:19:58.213062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:32.772 [2024-10-01 06:19:58.213070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.772 [2024-10-01 06:19:58.213078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.772 [2024-10-01 06:19:58.213137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:27:32.772 [2024-10-01 06:19:58.213147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:32.772 [2024-10-01 06:19:58.213155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:32.772 [2024-10-01 06:19:58.213163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.772 [2024-10-01 06:19:58.213292] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8022.977 ms, result 0 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92441 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92441 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92441 ']' 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:42.855 06:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:42.855 [2024-10-01 06:20:07.628354] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
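The restart above is the heart of the upgrade test: the target comes back with --config pointing at the JSON snapshot taken by save_config (ftl/common.sh@126) before the fills, so the TCP transport, subsystem, and FTL bdev are re-created declaratively instead of via fresh RPCs, and FTL now starts from the superblock that prep_upgrade_on_shutdown left behind. The save/kill/restart pattern, schematically (helper names as they appear in the log):

  ./scripts/rpc.py save_config > test/ftl/config/tgt.json   # snapshot the live target config
  killprocess "$spdk_tgt_pid"                               # clean shutdown runs the FTL upgrade-prep path
  ./build/bin/spdk_tgt --cpumask='[0]' --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!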
00:27:42.855 [2024-10-01 06:20:07.628477] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92441 ] 00:27:42.855 [2024-10-01 06:20:07.766086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.855 [2024-10-01 06:20:07.810683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.855 [2024-10-01 06:20:08.123858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:42.855 [2024-10-01 06:20:08.123935] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:42.855 [2024-10-01 06:20:08.385039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.855 [2024-10-01 06:20:08.385097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:42.855 [2024-10-01 06:20:08.385112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:42.855 [2024-10-01 06:20:08.385124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.385187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.385197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:42.856 [2024-10-01 06:20:08.385208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:42.856 [2024-10-01 06:20:08.385215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.385240] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:42.856 [2024-10-01 06:20:08.385499] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:42.856 [2024-10-01 06:20:08.385513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.385524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:42.856 [2024-10-01 06:20:08.385533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:27:42.856 [2024-10-01 06:20:08.385540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.386862] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:42.856 [2024-10-01 06:20:08.390318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.390355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:42.856 [2024-10-01 06:20:08.390368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.458 ms 00:27:42.856 [2024-10-01 06:20:08.390381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.390448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.390458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:42.856 [2024-10-01 06:20:08.390468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:42.856 [2024-10-01 06:20:08.390476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.396756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 
06:20:08.396784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:42.856 [2024-10-01 06:20:08.396798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.225 ms 00:27:42.856 [2024-10-01 06:20:08.396807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.396872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.396882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:42.856 [2024-10-01 06:20:08.396891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:42.856 [2024-10-01 06:20:08.396899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.396952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.396962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:42.856 [2024-10-01 06:20:08.396971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:42.856 [2024-10-01 06:20:08.396983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.397011] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:42.856 [2024-10-01 06:20:08.398655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.398676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:42.856 [2024-10-01 06:20:08.398686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.651 ms 00:27:42.856 [2024-10-01 06:20:08.398697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.398723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.398731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:42.856 [2024-10-01 06:20:08.398743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:42.856 [2024-10-01 06:20:08.398751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.398774] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:42.856 [2024-10-01 06:20:08.398794] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:42.856 [2024-10-01 06:20:08.398830] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:42.856 [2024-10-01 06:20:08.398869] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:42.856 [2024-10-01 06:20:08.398975] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:42.856 [2024-10-01 06:20:08.398985] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:42.856 [2024-10-01 06:20:08.398999] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:42.856 [2024-10-01 06:20:08.399010] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:42.856 [2024-10-01 06:20:08.399019] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:42.856 [2024-10-01 06:20:08.399027] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:42.856 [2024-10-01 06:20:08.399035] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:42.856 [2024-10-01 06:20:08.399045] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:42.856 [2024-10-01 06:20:08.399052] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:42.856 [2024-10-01 06:20:08.399061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.399069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:42.856 [2024-10-01 06:20:08.399077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:27:42.856 [2024-10-01 06:20:08.399086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.399173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.856 [2024-10-01 06:20:08.399181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:42.856 [2024-10-01 06:20:08.399189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:42.856 [2024-10-01 06:20:08.399196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.856 [2024-10-01 06:20:08.399301] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:42.856 [2024-10-01 06:20:08.399320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:42.856 [2024-10-01 06:20:08.399330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.856 [2024-10-01 06:20:08.399339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:42.856 [2024-10-01 06:20:08.399356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:42.856 [2024-10-01 06:20:08.399377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:42.856 [2024-10-01 06:20:08.399385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:42.856 [2024-10-01 06:20:08.399393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:42.856 [2024-10-01 06:20:08.399408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:42.856 [2024-10-01 06:20:08.399416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:42.856 [2024-10-01 06:20:08.399431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:42.856 [2024-10-01 06:20:08.399438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:42.856 [2024-10-01 06:20:08.399455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:42.856 [2024-10-01 06:20:08.399462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399470] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:42.856 [2024-10-01 06:20:08.399477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:42.856 [2024-10-01 06:20:08.399485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.856 [2024-10-01 06:20:08.399494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:42.856 [2024-10-01 06:20:08.399502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:42.856 [2024-10-01 06:20:08.399509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.856 [2024-10-01 06:20:08.399517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:42.856 [2024-10-01 06:20:08.399524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:42.856 [2024-10-01 06:20:08.399532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.856 [2024-10-01 06:20:08.399539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:42.856 [2024-10-01 06:20:08.399546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:42.856 [2024-10-01 06:20:08.399554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.856 [2024-10-01 06:20:08.399562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:42.856 [2024-10-01 06:20:08.399569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:42.856 [2024-10-01 06:20:08.399582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:42.856 [2024-10-01 06:20:08.399597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:42.856 [2024-10-01 06:20:08.399604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:42.856 [2024-10-01 06:20:08.399622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:42.856 [2024-10-01 06:20:08.399644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:42.856 [2024-10-01 06:20:08.399652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.856 [2024-10-01 06:20:08.399659] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:42.857 [2024-10-01 06:20:08.399670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:42.857 [2024-10-01 06:20:08.399678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.857 [2024-10-01 06:20:08.399686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.857 [2024-10-01 06:20:08.399694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:42.857 [2024-10-01 06:20:08.399702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:42.857 [2024-10-01 06:20:08.399710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:42.857 [2024-10-01 06:20:08.399717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:42.857 [2024-10-01 06:20:08.399725] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:42.857 [2024-10-01 06:20:08.399732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:42.857 [2024-10-01 06:20:08.399742] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:42.857 [2024-10-01 06:20:08.399756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:42.857 [2024-10-01 06:20:08.399775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:42.857 [2024-10-01 06:20:08.399799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:42.857 [2024-10-01 06:20:08.399807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:42.857 [2024-10-01 06:20:08.399815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:42.857 [2024-10-01 06:20:08.399822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:42.857 [2024-10-01 06:20:08.399889] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:42.857 [2024-10-01 06:20:08.399901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.857 [2024-10-01 06:20:08.399917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:42.857 [2024-10-01 06:20:08.399924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:42.857 [2024-10-01 06:20:08.399931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:42.857 [2024-10-01 06:20:08.399944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.857 [2024-10-01 06:20:08.399952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:42.857 [2024-10-01 06:20:08.399960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.711 ms 00:27:42.857 [2024-10-01 06:20:08.399969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.857 [2024-10-01 06:20:08.400011] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:42.857 [2024-10-01 06:20:08.400020] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:46.166 [2024-10-01 06:20:11.163733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.166 [2024-10-01 06:20:11.164130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:46.166 [2024-10-01 06:20:11.164175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2763.708 ms 00:27:46.166 [2024-10-01 06:20:11.164194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.166 [2024-10-01 06:20:11.175277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.166 [2024-10-01 06:20:11.175482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:46.166 [2024-10-01 06:20:11.175503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.892 ms 00:27:46.166 [2024-10-01 06:20:11.175513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.166 [2024-10-01 06:20:11.175612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.166 [2024-10-01 06:20:11.175624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:46.166 [2024-10-01 06:20:11.175640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:46.166 [2024-10-01 06:20:11.175649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.166 [2024-10-01 06:20:11.196348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.166 [2024-10-01 06:20:11.196413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:46.166 [2024-10-01 06:20:11.196428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.656 ms 00:27:46.166 [2024-10-01 06:20:11.196437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.166 [2024-10-01 06:20:11.196499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.166 [2024-10-01 06:20:11.196509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:46.166 [2024-10-01 06:20:11.196518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:46.166 [2024-10-01 06:20:11.196526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.197059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.197078] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:46.167 [2024-10-01 06:20:11.197099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.424 ms 00:27:46.167 [2024-10-01 06:20:11.197109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.197164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.197175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:46.167 [2024-10-01 06:20:11.197184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:46.167 [2024-10-01 06:20:11.197194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.204349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.204400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:46.167 [2024-10-01 06:20:11.204413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.133 ms 00:27:46.167 [2024-10-01 06:20:11.204424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.207465] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:46.167 [2024-10-01 06:20:11.207674] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:46.167 [2024-10-01 06:20:11.207696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.207718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:46.167 [2024-10-01 06:20:11.207730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.153 ms 00:27:46.167 [2024-10-01 06:20:11.207740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.213200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.213255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:46.167 [2024-10-01 06:20:11.213284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.160 ms 00:27:46.167 [2024-10-01 06:20:11.213297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.214766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.214928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:46.167 [2024-10-01 06:20:11.214944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.410 ms 00:27:46.167 [2024-10-01 06:20:11.214952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.216091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.216119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:46.167 [2024-10-01 06:20:11.216128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.106 ms 00:27:46.167 [2024-10-01 06:20:11.216135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.216457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.216475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:46.167 [2024-10-01 
06:20:11.216485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:27:46.167 [2024-10-01 06:20:11.216492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.234317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.234537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:46.167 [2024-10-01 06:20:11.234565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.804 ms 00:27:46.167 [2024-10-01 06:20:11.234575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.242287] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:46.167 [2024-10-01 06:20:11.243115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.243146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:46.167 [2024-10-01 06:20:11.243158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.494 ms 00:27:46.167 [2024-10-01 06:20:11.243170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.243259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.243271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:46.167 [2024-10-01 06:20:11.243285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:46.167 [2024-10-01 06:20:11.243293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.243346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.243356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:46.167 [2024-10-01 06:20:11.243365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:46.167 [2024-10-01 06:20:11.243372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.243399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.243408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:46.167 [2024-10-01 06:20:11.243418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:46.167 [2024-10-01 06:20:11.243426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.243462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:46.167 [2024-10-01 06:20:11.243472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.243481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:46.167 [2024-10-01 06:20:11.243489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:46.167 [2024-10-01 06:20:11.243502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.246829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.246878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:46.167 [2024-10-01 06:20:11.246889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.306 ms 00:27:46.167 [2024-10-01 06:20:11.246897] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.246971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.167 [2024-10-01 06:20:11.246983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:46.167 [2024-10-01 06:20:11.246992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:46.167 [2024-10-01 06:20:11.247000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.167 [2024-10-01 06:20:11.249244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2863.030 ms, result 0 00:27:46.167 [2024-10-01 06:20:11.263572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.167 [2024-10-01 06:20:11.279586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:46.167 [2024-10-01 06:20:11.287709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:46.431 06:20:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.431 06:20:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:46.431 06:20:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:46.431 06:20:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:46.431 06:20:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:46.691 [2024-10-01 06:20:12.072494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.691 [2024-10-01 06:20:12.072720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:46.691 [2024-10-01 06:20:12.072742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:46.691 [2024-10-01 06:20:12.072753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.691 [2024-10-01 06:20:12.072787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.691 [2024-10-01 06:20:12.072797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:46.691 [2024-10-01 06:20:12.072805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:46.691 [2024-10-01 06:20:12.072814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.691 [2024-10-01 06:20:12.072839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.691 [2024-10-01 06:20:12.072859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:46.691 [2024-10-01 06:20:12.072868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:46.691 [2024-10-01 06:20:12.072876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.691 [2024-10-01 06:20:12.072943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.443 ms, result 0 00:27:46.691 true 00:27:46.691 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:46.691 { 00:27:46.691 "name": "ftl", 00:27:46.691 "properties": [ 00:27:46.691 { 00:27:46.691 "name": "superblock_version", 00:27:46.691 "value": 5, 00:27:46.691 "read-only": true 00:27:46.691 }, 00:27:46.691 { 
00:27:46.691 "name": "base_device", 00:27:46.691 "bands": [ 00:27:46.691 { 00:27:46.691 "id": 0, 00:27:46.691 "state": "CLOSED", 00:27:46.691 "validity": 1.0 00:27:46.691 }, 00:27:46.691 { 00:27:46.691 "id": 1, 00:27:46.691 "state": "CLOSED", 00:27:46.691 "validity": 1.0 00:27:46.691 }, 00:27:46.691 { 00:27:46.691 "id": 2, 00:27:46.691 "state": "CLOSED", 00:27:46.691 "validity": 0.007843137254901933 00:27:46.691 }, 00:27:46.692 { 00:27:46.692 "id": 3, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 4, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 5, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 6, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 7, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 8, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 9, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 10, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 11, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 12, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 13, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 14, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 15, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 16, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 17, 00:27:46.692 "state": "FREE", 00:27:46.692 "validity": 0.0 00:27:46.692 } 00:27:46.692 ], 00:27:46.692 "read-only": true 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "name": "cache_device", 00:27:46.692 "type": "bdev", 00:27:46.692 "chunks": [ 00:27:46.692 { 00:27:46.692 "id": 0, 00:27:46.692 "state": "INACTIVE", 00:27:46.692 "utilization": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 1, 00:27:46.692 "state": "OPEN", 00:27:46.692 "utilization": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 2, 00:27:46.692 "state": "OPEN", 00:27:46.692 "utilization": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 3, 00:27:46.692 "state": "FREE", 00:27:46.692 "utilization": 0.0 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "id": 4, 00:27:46.692 "state": "FREE", 00:27:46.692 "utilization": 0.0 00:27:46.692 } 00:27:46.692 ], 00:27:46.692 "read-only": true 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "name": "verbose_mode", 00:27:46.692 "value": true, 00:27:46.692 "unit": "", 00:27:46.692 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:46.692 }, 00:27:46.692 { 00:27:46.692 "name": "prep_upgrade_on_shutdown", 00:27:46.692 "value": false, 00:27:46.692 "unit": "", 00:27:46.692 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:46.692 } 00:27:46.692 ] 00:27:46.692 } 00:27:46.951 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:46.951 06:20:12 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:46.951 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:46.951 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:46.951 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:46.951 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:46.951 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:46.951 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:47.212 Validate MD5 checksum, iteration 1 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:47.212 06:20:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.212 [2024-10-01 06:20:12.805526] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:27:47.212 [2024-10-01 06:20:12.805681] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92510 ] 00:27:47.471 [2024-10-01 06:20:12.944103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.471 [2024-10-01 06:20:12.982645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.363  Copying: 628/1024 [MB] (628 MBps) Copying: 1024/1024 [MB] (average 625 MBps) 00:27:50.363 00:27:50.363 06:20:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:50.363 06:20:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4ae44bb90f685fe8912f95b86e77c4dd 00:27:52.903 Validate MD5 checksum, iteration 2 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4ae44bb90f685fe8912f95b86e77c4dd != \4\a\e\4\4\b\b\9\0\f\6\8\5\f\e\8\9\1\2\f\9\5\b\8\6\e\7\7\c\4\d\d ]] 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:52.903 06:20:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:52.903 [2024-10-01 06:20:18.024106] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:27:52.903 [2024-10-01 06:20:18.024244] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92566 ] 00:27:52.903 [2024-10-01 06:20:18.158163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.903 [2024-10-01 06:20:18.201810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.829  Copying: 652/1024 [MB] (652 MBps) Copying: 1024/1024 [MB] (average 648 MBps) 00:27:59.829 00:27:59.829 06:20:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:59.829 06:20:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=bf415f388566540a75084d8d4e963933 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ bf415f388566540a75084d8d4e963933 != \b\f\4\1\5\f\3\8\8\5\6\6\5\4\0\a\7\5\0\8\4\d\8\d\4\e\9\6\3\9\3\3 ]] 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92441 ]] 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92441 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:01.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92668 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92668 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92668 ']' 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.746 06:20:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:01.746 [2024-10-01 06:20:26.990114] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:28:01.746 [2024-10-01 06:20:26.990233] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92668 ] 00:28:01.746 [2024-10-01 06:20:27.126432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.746 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92441 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:01.746 [2024-10-01 06:20:27.169611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.008 [2024-10-01 06:20:27.471024] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:02.008 [2024-10-01 06:20:27.471097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:02.008 [2024-10-01 06:20:27.618536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.008 [2024-10-01 06:20:27.618592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:02.008 [2024-10-01 06:20:27.618608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:02.008 [2024-10-01 06:20:27.618617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.008 [2024-10-01 06:20:27.618675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.008 [2024-10-01 06:20:27.618686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:02.008 [2024-10-01 06:20:27.618699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:02.008 [2024-10-01 06:20:27.618706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.008 [2024-10-01 06:20:27.618730] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:02.008 [2024-10-01 06:20:27.618993] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:02.008 [2024-10-01 06:20:27.619010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.008 [2024-10-01 06:20:27.619018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:02.008 [2024-10-01 06:20:27.619034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:28:02.008 [2024-10-01 06:20:27.619041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.008 [2024-10-01 06:20:27.619305] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:02.270 [2024-10-01 06:20:27.624182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.270 [2024-10-01 06:20:27.624220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:02.270 [2024-10-01 06:20:27.624231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.877 ms 00:28:02.270 [2024-10-01 06:20:27.624243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.625278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:02.270 [2024-10-01 06:20:27.625413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:02.270 [2024-10-01 06:20:27.625436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:02.270 [2024-10-01 06:20:27.625445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.625717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.270 [2024-10-01 06:20:27.625731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:02.270 [2024-10-01 06:20:27.625742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.223 ms 00:28:02.270 [2024-10-01 06:20:27.625750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.625785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.270 [2024-10-01 06:20:27.625793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:02.270 [2024-10-01 06:20:27.625801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:02.270 [2024-10-01 06:20:27.625814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.625841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.270 [2024-10-01 06:20:27.625865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:02.270 [2024-10-01 06:20:27.625873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:02.270 [2024-10-01 06:20:27.625883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.625907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:02.270 [2024-10-01 06:20:27.626761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.270 [2024-10-01 06:20:27.626776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:02.270 [2024-10-01 06:20:27.626785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.859 ms 00:28:02.270 [2024-10-01 06:20:27.626793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.626825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.270 [2024-10-01 06:20:27.626834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:02.270 [2024-10-01 06:20:27.626841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:02.270 [2024-10-01 06:20:27.626866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.626898] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:02.270 [2024-10-01 06:20:27.626918] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:02.270 [2024-10-01 06:20:27.626957] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:02.270 [2024-10-01 06:20:27.626973] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:02.270 [2024-10-01 06:20:27.627076] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:02.270 [2024-10-01 06:20:27.627090] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:02.270 [2024-10-01 06:20:27.627104] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:02.270 [2024-10-01 06:20:27.627114] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:02.270 [2024-10-01 06:20:27.627124] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:02.270 [2024-10-01 06:20:27.627132] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:02.270 [2024-10-01 06:20:27.627140] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:02.270 [2024-10-01 06:20:27.627147] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:02.270 [2024-10-01 06:20:27.627155] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:02.270 [2024-10-01 06:20:27.627164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.270 [2024-10-01 06:20:27.627171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:02.270 [2024-10-01 06:20:27.627182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:28:02.270 [2024-10-01 06:20:27.627196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.627284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.270 [2024-10-01 06:20:27.627292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:02.270 [2024-10-01 06:20:27.627299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:28:02.270 [2024-10-01 06:20:27.627309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.270 [2024-10-01 06:20:27.627410] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:02.270 [2024-10-01 06:20:27.627420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:02.270 [2024-10-01 06:20:27.627429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:02.270 [2024-10-01 06:20:27.627438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.270 [2024-10-01 06:20:27.627451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:02.270 [2024-10-01 06:20:27.627458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:02.270 [2024-10-01 06:20:27.627466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:02.270 [2024-10-01 06:20:27.627474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:02.270 [2024-10-01 06:20:27.627482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:02.270 [2024-10-01 06:20:27.627489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.270 [2024-10-01 06:20:27.627497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:02.270 [2024-10-01 06:20:27.627504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:02.270 [2024-10-01 06:20:27.627511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.270 [2024-10-01 06:20:27.627520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:02.270 [2024-10-01 06:20:27.627528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:02.270 [2024-10-01 06:20:27.627545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.270 [2024-10-01 06:20:27.627553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:02.270 [2024-10-01 06:20:27.627560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:02.270 [2024-10-01 06:20:27.627568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.270 [2024-10-01 06:20:27.627576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:02.270 [2024-10-01 06:20:27.627584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:02.270 [2024-10-01 06:20:27.627591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:02.270 [2024-10-01 06:20:27.627598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:02.270 [2024-10-01 06:20:27.627606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:02.270 [2024-10-01 06:20:27.627613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:02.270 [2024-10-01 06:20:27.627620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:02.270 [2024-10-01 06:20:27.627628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:02.270 [2024-10-01 06:20:27.627635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:02.270 [2024-10-01 06:20:27.627643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:02.270 [2024-10-01 06:20:27.627651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:02.270 [2024-10-01 06:20:27.627658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:02.270 [2024-10-01 06:20:27.627670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:02.270 [2024-10-01 06:20:27.627678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:02.270 [2024-10-01 06:20:27.627685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.271 [2024-10-01 06:20:27.627692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:02.271 [2024-10-01 06:20:27.627700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:02.271 [2024-10-01 06:20:27.627708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.271 [2024-10-01 06:20:27.627715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:02.271 [2024-10-01 06:20:27.627723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:02.271 [2024-10-01 06:20:27.627730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.271 [2024-10-01 06:20:27.627738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:02.271 [2024-10-01 06:20:27.627745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:02.271 [2024-10-01 06:20:27.627753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:02.271 [2024-10-01 06:20:27.627760] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:02.271 [2024-10-01 06:20:27.627768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:02.271 [2024-10-01 06:20:27.627782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:02.271 [2024-10-01 06:20:27.627790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:02.271 [2024-10-01 06:20:27.627806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:02.271 [2024-10-01 06:20:27.627815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:02.271 [2024-10-01 06:20:27.627822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:02.271 [2024-10-01 06:20:27.627830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:02.271 [2024-10-01 06:20:27.627838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:02.271 [2024-10-01 06:20:27.627857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:02.271 [2024-10-01 06:20:27.627867] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:02.271 [2024-10-01 06:20:27.627877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.627886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:02.271 [2024-10-01 06:20:27.627895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.627903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.627911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:02.271 [2024-10-01 06:20:27.627919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:02.271 [2024-10-01 06:20:27.627927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:02.271 [2024-10-01 06:20:27.627936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:02.271 [2024-10-01 06:20:27.627945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.627956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.627964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.627973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.627981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.627998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.628007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:02.271 [2024-10-01 06:20:27.628015] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:02.271 [2024-10-01 06:20:27.628024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.628033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:02.271 [2024-10-01 06:20:27.628042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:02.271 [2024-10-01 06:20:27.628055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:02.271 [2024-10-01 06:20:27.628063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:02.271 [2024-10-01 06:20:27.628071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.628080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:02.271 [2024-10-01 06:20:27.628089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.730 ms 00:28:02.271 [2024-10-01 06:20:27.628097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.636924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.636963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:02.271 [2024-10-01 06:20:27.636975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.769 ms 00:28:02.271 [2024-10-01 06:20:27.636983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.637036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.637047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:02.271 [2024-10-01 06:20:27.637056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:02.271 [2024-10-01 06:20:27.637064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.661547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.661625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:02.271 [2024-10-01 06:20:27.661650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.398 ms 00:28:02.271 [2024-10-01 06:20:27.661665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.661766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.661784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:02.271 [2024-10-01 06:20:27.661812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:02.271 [2024-10-01 06:20:27.661827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.662039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.662065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:02.271 [2024-10-01 06:20:27.662081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:28:02.271 [2024-10-01 06:20:27.662099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.662179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.662197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:02.271 [2024-10-01 06:20:27.662214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:02.271 [2024-10-01 06:20:27.662230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.670678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.670715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:02.271 [2024-10-01 06:20:27.670727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.410 ms 00:28:02.271 [2024-10-01 06:20:27.670735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.670863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.670875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:02.271 [2024-10-01 06:20:27.670884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:02.271 [2024-10-01 06:20:27.670892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.675997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.676035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:02.271 [2024-10-01 06:20:27.676046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.085 ms 00:28:02.271 [2024-10-01 06:20:27.676055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.677437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.677474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:02.271 [2024-10-01 06:20:27.677486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:28:02.271 [2024-10-01 06:20:27.677495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.694581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.694640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:02.271 [2024-10-01 06:20:27.694653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.059 ms 00:28:02.271 [2024-10-01 06:20:27.694666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.694805] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:02.271 [2024-10-01 06:20:27.694917] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:02.271 [2024-10-01 06:20:27.695016] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:02.271 [2024-10-01 06:20:27.695114] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:02.271 [2024-10-01 06:20:27.695123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.695132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:02.271 [2024-10-01 
06:20:27.695141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.411 ms 00:28:02.271 [2024-10-01 06:20:27.695149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.695211] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:02.271 [2024-10-01 06:20:27.695227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.695236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:02.271 [2024-10-01 06:20:27.695245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:02.271 [2024-10-01 06:20:27.695254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.699308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.699353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:02.271 [2024-10-01 06:20:27.699365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.029 ms 00:28:02.271 [2024-10-01 06:20:27.699375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.700025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.700057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:02.271 [2024-10-01 06:20:27.700067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:02.271 [2024-10-01 06:20:27.700076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.271 [2024-10-01 06:20:27.700155] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:02.271 [2024-10-01 06:20:27.700330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.271 [2024-10-01 06:20:27.700342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:02.271 [2024-10-01 06:20:27.700352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:28:02.271 [2024-10-01 06:20:27.700360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.213 [2024-10-01 06:20:28.552286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.213 [2024-10-01 06:20:28.552368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:03.213 [2024-10-01 06:20:28.552383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 851.608 ms 00:28:03.213 [2024-10-01 06:20:28.552392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.213 [2024-10-01 06:20:28.554171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.213 [2024-10-01 06:20:28.554216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:03.213 [2024-10-01 06:20:28.554228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.378 ms 00:28:03.213 [2024-10-01 06:20:28.554236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.213 [2024-10-01 06:20:28.554873] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:03.213 [2024-10-01 06:20:28.554903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.213 [2024-10-01 06:20:28.554913] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:03.213 [2024-10-01 06:20:28.554922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:28:03.213 [2024-10-01 06:20:28.554930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.213 [2024-10-01 06:20:28.554961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.213 [2024-10-01 06:20:28.554970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:03.213 [2024-10-01 06:20:28.554978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.213 [2024-10-01 06:20:28.554986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.213 [2024-10-01 06:20:28.555024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 854.868 ms, result 0 00:28:03.213 [2024-10-01 06:20:28.555071] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:03.213 [2024-10-01 06:20:28.555129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.213 [2024-10-01 06:20:28.555139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:03.213 [2024-10-01 06:20:28.555147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:28:03.213 [2024-10-01 06:20:28.555154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.430436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.430501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:04.160 [2024-10-01 06:20:29.430517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 874.870 ms 00:28:04.160 [2024-10-01 06:20:29.430526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.432309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.432343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:04.160 [2024-10-01 06:20:29.432353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.429 ms 00:28:04.160 [2024-10-01 06:20:29.432362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.433186] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:04.160 [2024-10-01 06:20:29.433213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.433222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:04.160 [2024-10-01 06:20:29.433231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.824 ms 00:28:04.160 [2024-10-01 06:20:29.433238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.433268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.433277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:04.160 [2024-10-01 06:20:29.433285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.160 [2024-10-01 06:20:29.433293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 
06:20:29.433329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 878.259 ms, result 0 00:28:04.160 [2024-10-01 06:20:29.433374] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:04.160 [2024-10-01 06:20:29.433385] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:04.160 [2024-10-01 06:20:29.433395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.433404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:04.160 [2024-10-01 06:20:29.433412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1733.261 ms 00:28:04.160 [2024-10-01 06:20:29.433420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.433450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.433459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:04.160 [2024-10-01 06:20:29.433471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:04.160 [2024-10-01 06:20:29.433479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.441770] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:04.160 [2024-10-01 06:20:29.441882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.441894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:04.160 [2024-10-01 06:20:29.441905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.387 ms 00:28:04.160 [2024-10-01 06:20:29.441913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.442598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.442621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:04.160 [2024-10-01 06:20:29.442630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.615 ms 00:28:04.160 [2024-10-01 06:20:29.442638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.444874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.444913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:04.160 [2024-10-01 06:20:29.444922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.217 ms 00:28:04.160 [2024-10-01 06:20:29.444930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.444973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.444983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:04.160 [2024-10-01 06:20:29.444992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:04.160 [2024-10-01 06:20:29.444999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.445116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.445133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:04.160 
[2024-10-01 06:20:29.445142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:04.160 [2024-10-01 06:20:29.445151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.445174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.445183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:04.160 [2024-10-01 06:20:29.445191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:04.160 [2024-10-01 06:20:29.445199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.445230] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:04.160 [2024-10-01 06:20:29.445240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.445251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:04.160 [2024-10-01 06:20:29.445259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:04.160 [2024-10-01 06:20:29.445267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.445321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.160 [2024-10-01 06:20:29.445338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:04.160 [2024-10-01 06:20:29.445346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:04.160 [2024-10-01 06:20:29.445354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.160 [2024-10-01 06:20:29.446604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1827.636 ms, result 0 00:28:04.160 [2024-10-01 06:20:29.462313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.160 [2024-10-01 06:20:29.478316] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:04.160 [2024-10-01 06:20:29.486423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:04.160 Validate MD5 checksum, iteration 1 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:04.160 06:20:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:04.160 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:04.161 06:20:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:04.422 [2024-10-01 06:20:29.777493] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:28:04.422 [2024-10-01 06:20:29.777692] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92698 ] 00:28:04.422 [2024-10-01 06:20:29.921026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.422 [2024-10-01 06:20:29.964427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.051  Copying: 599/1024 [MB] (599 MBps) Copying: 1024/1024 [MB] (average 597 MBps) 00:28:10.051 00:28:10.051 06:20:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:10.051 06:20:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:12.009 Validate MD5 checksum, iteration 2 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4ae44bb90f685fe8912f95b86e77c4dd 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4ae44bb90f685fe8912f95b86e77c4dd != \4\a\e\4\4\b\b\9\0\f\6\8\5\f\e\8\9\1\2\f\9\5\b\8\6\e\7\7\c\4\d\d ]] 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:12.009 06:20:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:12.009 [2024-10-01 06:20:37.226793] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:28:12.009 [2024-10-01 06:20:37.226922] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92786 ] 00:28:12.009 [2024-10-01 06:20:37.366205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.009 [2024-10-01 06:20:37.411301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.726  Copying: 520/1024 [MB] (520 MBps) Copying: 1024/1024 [MB] (average 555 MBps) 00:28:15.726 00:28:15.726 06:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:15.726 06:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=bf415f388566540a75084d8d4e963933 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ bf415f388566540a75084d8d4e963933 != \b\f\4\1\5\f\3\8\8\5\6\6\5\4\0\a\7\5\0\8\4\d\8\d\4\e\9\6\3\9\3\3 ]] 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:17.644 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:17.906 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:17.906 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:17.906 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:17.906 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:17.906 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92668 ]] 00:28:17.906 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92668 00:28:17.906 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92668 ']' 00:28:17.906 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92668 00:28:17.907 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:17.907 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:17.907 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92668 00:28:17.907 killing process with pid 92668 00:28:17.907 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:17.907 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:17.907 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92668' 00:28:17.907 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92668 00:28:17.907 06:20:43 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@974 -- # wait 92668 00:28:17.907 [2024-10-01 06:20:43.508107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:17.907 [2024-10-01 06:20:43.512233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.512275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:17.907 [2024-10-01 06:20:43.512290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:17.907 [2024-10-01 06:20:43.512298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.907 [2024-10-01 06:20:43.512320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:17.907 [2024-10-01 06:20:43.512875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.512892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:17.907 [2024-10-01 06:20:43.512902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:28:17.907 [2024-10-01 06:20:43.512910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.907 [2024-10-01 06:20:43.513147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.513159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:17.907 [2024-10-01 06:20:43.513168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:28:17.907 [2024-10-01 06:20:43.513178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.907 [2024-10-01 06:20:43.514715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.514744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:17.907 [2024-10-01 06:20:43.514755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.520 ms 00:28:17.907 [2024-10-01 06:20:43.514763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.907 [2024-10-01 06:20:43.516107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.516185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:17.907 [2024-10-01 06:20:43.516323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.313 ms 00:28:17.907 [2024-10-01 06:20:43.516348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.907 [2024-10-01 06:20:43.518131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.518234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:17.907 [2024-10-01 06:20:43.518286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.719 ms 00:28:17.907 [2024-10-01 06:20:43.518328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.907 [2024-10-01 06:20:43.519709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.519819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:17.907 [2024-10-01 06:20:43.519880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.291 ms 00:28:17.907 [2024-10-01 06:20:43.519953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.907 [2024-10-01 06:20:43.520042] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.520184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:17.907 [2024-10-01 06:20:43.520255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:17.907 [2024-10-01 06:20:43.520285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.907 [2024-10-01 06:20:43.522038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.907 [2024-10-01 06:20:43.522131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:17.907 [2024-10-01 06:20:43.522180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.708 ms 00:28:17.907 [2024-10-01 06:20:43.522201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.167 [2024-10-01 06:20:43.523877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.168 [2024-10-01 06:20:43.523966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:18.168 [2024-10-01 06:20:43.524094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.620 ms 00:28:18.168 [2024-10-01 06:20:43.524105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.525591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.168 [2024-10-01 06:20:43.525628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:18.168 [2024-10-01 06:20:43.525640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.414 ms 00:28:18.168 [2024-10-01 06:20:43.525648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.527658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.168 [2024-10-01 06:20:43.527752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:18.168 [2024-10-01 06:20:43.527765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.952 ms 00:28:18.168 [2024-10-01 06:20:43.527773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.527800] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:18.168 [2024-10-01 06:20:43.527815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:18.168 [2024-10-01 06:20:43.527829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:18.168 [2024-10-01 06:20:43.527838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:18.168 [2024-10-01 06:20:43.527857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:18.168 [2024-10-01 06:20:43.527993] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:18.168 [2024-10-01 06:20:43.528002] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: db08d075-0ca7-4eb3-884c-ec3dac13a152 00:28:18.168 [2024-10-01 06:20:43.528010] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:18.168 [2024-10-01 06:20:43.528018] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:18.168 [2024-10-01 06:20:43.528025] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:18.168 [2024-10-01 06:20:43.528033] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:18.168 [2024-10-01 06:20:43.528040] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:18.168 [2024-10-01 06:20:43.528048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:18.168 [2024-10-01 06:20:43.528055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:18.168 [2024-10-01 06:20:43.528062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:18.168 [2024-10-01 06:20:43.528069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:18.168 [2024-10-01 06:20:43.528076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.168 [2024-10-01 06:20:43.528085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:18.168 [2024-10-01 06:20:43.528093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:28:18.168 [2024-10-01 06:20:43.528103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.529905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.168 [2024-10-01 06:20:43.529931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:18.168 [2024-10-01 06:20:43.529940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.771 ms 00:28:18.168 [2024-10-01 06:20:43.529948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.530043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:18.168 [2024-10-01 06:20:43.530051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:18.168 [2024-10-01 06:20:43.530063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:28:18.168 [2024-10-01 06:20:43.530070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.536665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.536776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:18.168 [2024-10-01 06:20:43.536831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.536867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.536911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.537101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:18.168 [2024-10-01 06:20:43.537131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.537151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.537249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.537313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:18.168 [2024-10-01 06:20:43.537358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.537380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.537413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.537469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:18.168 [2024-10-01 06:20:43.537489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.537556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.549032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.549210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:18.168 [2024-10-01 06:20:43.549264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.549287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.558305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.558458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:18.168 [2024-10-01 06:20:43.558480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.558488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.558567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.558578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:18.168 [2024-10-01 06:20:43.558587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.558595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.558625] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.558634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:18.168 [2024-10-01 06:20:43.558646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.558659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.558730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.558740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:18.168 [2024-10-01 06:20:43.558748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.558756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.558784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.558793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:18.168 [2024-10-01 06:20:43.558801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.558808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.168 [2024-10-01 06:20:43.558870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.168 [2024-10-01 06:20:43.558881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:18.168 [2024-10-01 06:20:43.558889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.168 [2024-10-01 06:20:43.558897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.169 [2024-10-01 06:20:43.558943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:18.169 [2024-10-01 06:20:43.558953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:18.169 [2024-10-01 06:20:43.558961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:18.169 [2024-10-01 06:20:43.558968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.169 [2024-10-01 06:20:43.559109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 46.846 ms, result 0 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:18.429 Remove shared memory files 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92441 00:28:18.429 06:20:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:18.429 ************************************ 00:28:18.429 END TEST ftl_upgrade_shutdown 00:28:18.429 ************************************ 00:28:18.429 00:28:18.429 real 1m25.132s 00:28:18.429 user 1m51.894s 00:28:18.429 sys 0m20.099s 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:18.429 06:20:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:18.429 06:20:43 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:18.429 06:20:43 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:18.429 06:20:43 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:18.429 06:20:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:18.429 06:20:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:18.429 ************************************ 00:28:18.429 START TEST ftl_restore_fast 00:28:18.429 ************************************ 00:28:18.429 06:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:18.429 * Looking for test storage... 00:28:18.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:18.429 06:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:18.429 06:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:18.429 06:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.692 --rc genhtml_branch_coverage=1 00:28:18.692 --rc genhtml_function_coverage=1 00:28:18.692 --rc genhtml_legend=1 00:28:18.692 --rc geninfo_all_blocks=1 00:28:18.692 --rc geninfo_unexecuted_blocks=1 00:28:18.692 00:28:18.692 ' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.692 --rc genhtml_branch_coverage=1 00:28:18.692 --rc genhtml_function_coverage=1 00:28:18.692 --rc genhtml_legend=1 00:28:18.692 --rc geninfo_all_blocks=1 00:28:18.692 --rc geninfo_unexecuted_blocks=1 00:28:18.692 00:28:18.692 ' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.692 --rc genhtml_branch_coverage=1 00:28:18.692 --rc genhtml_function_coverage=1 00:28:18.692 --rc genhtml_legend=1 00:28:18.692 --rc geninfo_all_blocks=1 00:28:18.692 --rc geninfo_unexecuted_blocks=1 00:28:18.692 00:28:18.692 ' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.692 --rc genhtml_branch_coverage=1 00:28:18.692 --rc genhtml_function_coverage=1 00:28:18.692 --rc genhtml_legend=1 00:28:18.692 --rc geninfo_all_blocks=1 00:28:18.692 --rc geninfo_unexecuted_blocks=1 00:28:18.692 00:28:18.692 ' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:18.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.F6adQJ9DN9 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92926 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92926 00:28:18.692 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 92926 ']' 00:28:18.693 06:20:44 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:18.693 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.693 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:18.693 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.693 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:18.693 06:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:18.693 [2024-10-01 06:20:44.208152] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:28:18.693 [2024-10-01 06:20:44.208343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92926 ] 00:28:18.954 [2024-10-01 06:20:44.350837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.954 [2024-10-01 06:20:44.394728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.529 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.529 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:19.529 06:20:45 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:19.529 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:19.529 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:19.529 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:19.529 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:19.529 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:19.790 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:19.790 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:19.790 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:19.790 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:19.790 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:19.790 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:19.790 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:19.790 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:20.105 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:20.105 { 00:28:20.105 "name": "nvme0n1", 00:28:20.105 "aliases": [ 00:28:20.105 "c63df612-2f12-4fe4-b6bc-9f587b9fc124" 00:28:20.105 ], 00:28:20.105 "product_name": "NVMe disk", 00:28:20.105 "block_size": 4096, 00:28:20.105 "num_blocks": 1310720, 00:28:20.105 "uuid": "c63df612-2f12-4fe4-b6bc-9f587b9fc124", 00:28:20.105 "numa_id": -1, 00:28:20.105 "assigned_rate_limits": { 00:28:20.105 "rw_ios_per_sec": 0, 00:28:20.105 "rw_mbytes_per_sec": 0, 00:28:20.105 "r_mbytes_per_sec": 0, 00:28:20.105 "w_mbytes_per_sec": 0 00:28:20.105 }, 00:28:20.105 "claimed": true, 00:28:20.105 "claim_type": "read_many_write_one", 00:28:20.105 "zoned": false, 00:28:20.105 "supported_io_types": { 00:28:20.105 "read": true, 00:28:20.105 "write": true, 00:28:20.105 "unmap": true, 00:28:20.105 "flush": true, 00:28:20.105 "reset": true, 00:28:20.105 "nvme_admin": true, 00:28:20.105 "nvme_io": true, 00:28:20.105 "nvme_io_md": false, 00:28:20.105 "write_zeroes": true, 00:28:20.105 "zcopy": false, 00:28:20.105 "get_zone_info": false, 00:28:20.105 "zone_management": false, 00:28:20.105 "zone_append": false, 00:28:20.105 "compare": true, 00:28:20.105 "compare_and_write": false, 00:28:20.105 "abort": true, 00:28:20.105 "seek_hole": false, 00:28:20.105 "seek_data": false, 00:28:20.105 "copy": true, 00:28:20.105 "nvme_iov_md": 
false 00:28:20.105 }, 00:28:20.105 "driver_specific": { 00:28:20.105 "nvme": [ 00:28:20.105 { 00:28:20.105 "pci_address": "0000:00:11.0", 00:28:20.105 "trid": { 00:28:20.105 "trtype": "PCIe", 00:28:20.105 "traddr": "0000:00:11.0" 00:28:20.105 }, 00:28:20.105 "ctrlr_data": { 00:28:20.105 "cntlid": 0, 00:28:20.105 "vendor_id": "0x1b36", 00:28:20.105 "model_number": "QEMU NVMe Ctrl", 00:28:20.105 "serial_number": "12341", 00:28:20.105 "firmware_revision": "8.0.0", 00:28:20.105 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:20.105 "oacs": { 00:28:20.105 "security": 0, 00:28:20.105 "format": 1, 00:28:20.105 "firmware": 0, 00:28:20.106 "ns_manage": 1 00:28:20.106 }, 00:28:20.106 "multi_ctrlr": false, 00:28:20.106 "ana_reporting": false 00:28:20.106 }, 00:28:20.106 "vs": { 00:28:20.106 "nvme_version": "1.4" 00:28:20.106 }, 00:28:20.106 "ns_data": { 00:28:20.106 "id": 1, 00:28:20.106 "can_share": false 00:28:20.106 } 00:28:20.106 } 00:28:20.106 ], 00:28:20.106 "mp_policy": "active_passive" 00:28:20.106 } 00:28:20.106 } 00:28:20.106 ]' 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:20.106 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:20.373 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=6d19a4a8-7865-4838-b6d5-4164bf11e603 00:28:20.373 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:20.373 06:20:45 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d19a4a8-7865-4838-b6d5-4164bf11e603 00:28:20.635 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=93750eec-b0c0-4122-bf96-beaf62fbbd4c 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 93750eec-b0c0-4122-bf96-beaf62fbbd4c 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=04122485-7be4-496b-8089-db8d3f174ca6 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 04122485-7be4-496b-8089-db8d3f174ca6 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=04122485-7be4-496b-8089-db8d3f174ca6 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 04122485-7be4-496b-8089-db8d3f174ca6 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=04122485-7be4-496b-8089-db8d3f174ca6 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:20.897 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 04122485-7be4-496b-8089-db8d3f174ca6 00:28:21.157 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:21.157 { 00:28:21.157 "name": "04122485-7be4-496b-8089-db8d3f174ca6", 00:28:21.157 "aliases": [ 00:28:21.157 "lvs/nvme0n1p0" 00:28:21.157 ], 00:28:21.157 "product_name": "Logical Volume", 00:28:21.157 "block_size": 4096, 00:28:21.157 "num_blocks": 26476544, 00:28:21.157 "uuid": "04122485-7be4-496b-8089-db8d3f174ca6", 00:28:21.157 "assigned_rate_limits": { 00:28:21.157 "rw_ios_per_sec": 0, 00:28:21.157 "rw_mbytes_per_sec": 0, 00:28:21.157 "r_mbytes_per_sec": 0, 00:28:21.157 "w_mbytes_per_sec": 0 00:28:21.157 }, 00:28:21.157 "claimed": false, 00:28:21.157 "zoned": false, 00:28:21.157 "supported_io_types": { 00:28:21.158 "read": true, 00:28:21.158 "write": true, 00:28:21.158 "unmap": true, 00:28:21.158 "flush": false, 00:28:21.158 "reset": true, 00:28:21.158 "nvme_admin": false, 00:28:21.158 "nvme_io": false, 00:28:21.158 "nvme_io_md": false, 00:28:21.158 "write_zeroes": true, 00:28:21.158 "zcopy": false, 00:28:21.158 "get_zone_info": false, 00:28:21.158 "zone_management": false, 00:28:21.158 "zone_append": false, 00:28:21.158 "compare": false, 00:28:21.158 "compare_and_write": false, 00:28:21.158 "abort": false, 00:28:21.158 "seek_hole": true, 00:28:21.158 "seek_data": true, 00:28:21.158 "copy": false, 00:28:21.158 "nvme_iov_md": false 00:28:21.158 }, 00:28:21.158 "driver_specific": { 00:28:21.158 "lvol": { 00:28:21.158 "lvol_store_uuid": "93750eec-b0c0-4122-bf96-beaf62fbbd4c", 00:28:21.158 "base_bdev": "nvme0n1", 00:28:21.158 "thin_provision": true, 00:28:21.158 "num_allocated_clusters": 0, 00:28:21.158 "snapshot": false, 00:28:21.158 "clone": false, 00:28:21.158 "esnap_clone": false 00:28:21.158 } 00:28:21.158 } 00:28:21.158 } 00:28:21.158 ]' 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:21.158 06:20:46 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
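The carve-out traced above condenses to three RPCs; replayed here with the UUIDs this run returned standing in for the values a fresh run would print:

    # Drop the stale lvstore found on the namespace, then build a
    # thin-provisioned 103424 MiB volume for FTL to use as its data device.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_lvol_delete_lvstore -u 6d19a4a8-7865-4838-b6d5-4164bf11e603
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs   # -> 93750eec-b0c0-4122-bf96-beaf62fbbd4c
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 93750eec-b0c0-4122-bf96-beaf62fbbd4c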
00:28:21.419 06:20:47 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:21.419 06:20:47 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:21.419 06:20:47 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 04122485-7be4-496b-8089-db8d3f174ca6 00:28:21.419 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=04122485-7be4-496b-8089-db8d3f174ca6 00:28:21.419 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:21.419 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:21.419 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:21.419 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 04122485-7be4-496b-8089-db8d3f174ca6 00:28:21.680 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:21.680 { 00:28:21.680 "name": "04122485-7be4-496b-8089-db8d3f174ca6", 00:28:21.680 "aliases": [ 00:28:21.680 "lvs/nvme0n1p0" 00:28:21.680 ], 00:28:21.680 "product_name": "Logical Volume", 00:28:21.680 "block_size": 4096, 00:28:21.680 "num_blocks": 26476544, 00:28:21.680 "uuid": "04122485-7be4-496b-8089-db8d3f174ca6", 00:28:21.680 "assigned_rate_limits": { 00:28:21.680 "rw_ios_per_sec": 0, 00:28:21.680 "rw_mbytes_per_sec": 0, 00:28:21.680 "r_mbytes_per_sec": 0, 00:28:21.680 "w_mbytes_per_sec": 0 00:28:21.680 }, 00:28:21.680 "claimed": false, 00:28:21.680 "zoned": false, 00:28:21.680 "supported_io_types": { 00:28:21.680 "read": true, 00:28:21.680 "write": true, 00:28:21.680 "unmap": true, 00:28:21.680 "flush": false, 00:28:21.680 "reset": true, 00:28:21.680 "nvme_admin": false, 00:28:21.680 "nvme_io": false, 00:28:21.680 "nvme_io_md": false, 00:28:21.680 "write_zeroes": true, 00:28:21.680 "zcopy": false, 00:28:21.680 "get_zone_info": false, 00:28:21.680 "zone_management": false, 00:28:21.680 "zone_append": false, 00:28:21.680 "compare": false, 00:28:21.680 "compare_and_write": false, 00:28:21.680 "abort": false, 00:28:21.680 "seek_hole": true, 00:28:21.680 "seek_data": true, 00:28:21.680 "copy": false, 00:28:21.680 "nvme_iov_md": false 00:28:21.680 }, 00:28:21.680 "driver_specific": { 00:28:21.680 "lvol": { 00:28:21.680 "lvol_store_uuid": "93750eec-b0c0-4122-bf96-beaf62fbbd4c", 00:28:21.680 "base_bdev": "nvme0n1", 00:28:21.680 "thin_provision": true, 00:28:21.680 "num_allocated_clusters": 0, 00:28:21.680 "snapshot": false, 00:28:21.680 "clone": false, 00:28:21.680 "esnap_clone": false 00:28:21.680 } 00:28:21.680 } 00:28:21.680 } 00:28:21.680 ]' 00:28:21.680 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:21.680 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:21.680 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:21.940 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:21.940 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:21.941 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:21.941 06:20:47 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:21.941 06:20:47 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:22.202 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
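With the cache controller attached, common.sh sizes the write buffer at 5171 MiB and the bdev_split_create call above slices exactly one partition of that size off nvc0n1. The construct args assembled just below then reduce to a single RPC; a condensed sketch using this run's identifiers:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # nvc0n1p0, the lone 5171 MiB split, backs the FTL write-buffer cache.
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # Data on the thin lvol, cache on the split, L2P table capped at 10 MiB
    # of DRAM, fast shutdown enabled; -t 240 matches the test's RPC timeout.
    $rpc -t 240 bdev_ftl_create -b ftl0 \
        -d 04122485-7be4-496b-8089-db8d3f174ca6 \
        -c nvc0n1p0 --l2p_dram_limit 10 --fast-shutdown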
00:28:22.202 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 04122485-7be4-496b-8089-db8d3f174ca6 00:28:22.202 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=04122485-7be4-496b-8089-db8d3f174ca6 00:28:22.202 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:22.202 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:22.202 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:22.202 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 04122485-7be4-496b-8089-db8d3f174ca6 00:28:22.202 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:22.202 { 00:28:22.202 "name": "04122485-7be4-496b-8089-db8d3f174ca6", 00:28:22.202 "aliases": [ 00:28:22.202 "lvs/nvme0n1p0" 00:28:22.202 ], 00:28:22.202 "product_name": "Logical Volume", 00:28:22.202 "block_size": 4096, 00:28:22.202 "num_blocks": 26476544, 00:28:22.202 "uuid": "04122485-7be4-496b-8089-db8d3f174ca6", 00:28:22.202 "assigned_rate_limits": { 00:28:22.202 "rw_ios_per_sec": 0, 00:28:22.202 "rw_mbytes_per_sec": 0, 00:28:22.202 "r_mbytes_per_sec": 0, 00:28:22.202 "w_mbytes_per_sec": 0 00:28:22.202 }, 00:28:22.202 "claimed": false, 00:28:22.202 "zoned": false, 00:28:22.202 "supported_io_types": { 00:28:22.202 "read": true, 00:28:22.202 "write": true, 00:28:22.202 "unmap": true, 00:28:22.202 "flush": false, 00:28:22.202 "reset": true, 00:28:22.202 "nvme_admin": false, 00:28:22.202 "nvme_io": false, 00:28:22.202 "nvme_io_md": false, 00:28:22.202 "write_zeroes": true, 00:28:22.202 "zcopy": false, 00:28:22.202 "get_zone_info": false, 00:28:22.202 "zone_management": false, 00:28:22.202 "zone_append": false, 00:28:22.202 "compare": false, 00:28:22.202 "compare_and_write": false, 00:28:22.202 "abort": false, 00:28:22.202 "seek_hole": true, 00:28:22.202 "seek_data": true, 00:28:22.202 "copy": false, 00:28:22.202 "nvme_iov_md": false 00:28:22.202 }, 00:28:22.202 "driver_specific": { 00:28:22.202 "lvol": { 00:28:22.202 "lvol_store_uuid": "93750eec-b0c0-4122-bf96-beaf62fbbd4c", 00:28:22.202 "base_bdev": "nvme0n1", 00:28:22.202 "thin_provision": true, 00:28:22.202 "num_allocated_clusters": 0, 00:28:22.202 "snapshot": false, 00:28:22.202 "clone": false, 00:28:22.202 "esnap_clone": false 00:28:22.202 } 00:28:22.202 } 00:28:22.202 } 00:28:22.202 ]' 00:28:22.202 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 04122485-7be4-496b-8089-db8d3f174ca6 --l2p_dram_limit 10' 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:22.465 06:20:47 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:22.465 06:20:47 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 04122485-7be4-496b-8089-db8d3f174ca6 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:22.465 [2024-10-01 06:20:48.057667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.057948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:22.465 [2024-10-01 06:20:48.057973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:22.465 [2024-10-01 06:20:48.057984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.058072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.058084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.465 [2024-10-01 06:20:48.058093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:22.465 [2024-10-01 06:20:48.058106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.058133] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:22.465 [2024-10-01 06:20:48.058432] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:22.465 [2024-10-01 06:20:48.058448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.058460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.465 [2024-10-01 06:20:48.058473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:28:22.465 [2024-10-01 06:20:48.058483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.058552] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d290051c-b53f-492a-9b45-c0ff8fd3e166 00:28:22.465 [2024-10-01 06:20:48.060005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.060045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:22.465 [2024-10-01 06:20:48.060058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:22.465 [2024-10-01 06:20:48.060072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.067507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.067540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.465 [2024-10-01 06:20:48.067554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.382 ms 00:28:22.465 [2024-10-01 06:20:48.067565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.067648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.067658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.465 [2024-10-01 06:20:48.067669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 
00:28:22.465 [2024-10-01 06:20:48.067683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.067763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.067773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:22.465 [2024-10-01 06:20:48.067783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:22.465 [2024-10-01 06:20:48.067790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.067817] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:22.465 [2024-10-01 06:20:48.069698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.069816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.465 [2024-10-01 06:20:48.069834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.891 ms 00:28:22.465 [2024-10-01 06:20:48.069854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.069893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.069903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:22.465 [2024-10-01 06:20:48.069915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:22.465 [2024-10-01 06:20:48.069927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.069945] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:22.465 [2024-10-01 06:20:48.070091] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:22.465 [2024-10-01 06:20:48.070103] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:22.465 [2024-10-01 06:20:48.070117] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:22.465 [2024-10-01 06:20:48.070127] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:22.465 [2024-10-01 06:20:48.070138] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:22.465 [2024-10-01 06:20:48.070152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:22.465 [2024-10-01 06:20:48.070166] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:22.465 [2024-10-01 06:20:48.070173] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:22.465 [2024-10-01 06:20:48.070186] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:22.465 [2024-10-01 06:20:48.070196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 06:20:48.070206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:22.465 [2024-10-01 06:20:48.070213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:28:22.465 [2024-10-01 06:20:48.070222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.070306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.465 [2024-10-01 
06:20:48.070323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:22.465 [2024-10-01 06:20:48.070333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:22.465 [2024-10-01 06:20:48.070342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.465 [2024-10-01 06:20:48.070438] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:22.465 [2024-10-01 06:20:48.070452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:22.465 [2024-10-01 06:20:48.070466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:22.465 [2024-10-01 06:20:48.070479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.465 [2024-10-01 06:20:48.070488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:22.465 [2024-10-01 06:20:48.070498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:22.465 [2024-10-01 06:20:48.070506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:22.465 [2024-10-01 06:20:48.070516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:22.465 [2024-10-01 06:20:48.070524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:22.465 [2024-10-01 06:20:48.070533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:22.465 [2024-10-01 06:20:48.070541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:22.465 [2024-10-01 06:20:48.070550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:22.465 [2024-10-01 06:20:48.070559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:22.465 [2024-10-01 06:20:48.070571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:22.465 [2024-10-01 06:20:48.070579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:22.465 [2024-10-01 06:20:48.070589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.465 [2024-10-01 06:20:48.070596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:22.465 [2024-10-01 06:20:48.070605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:22.465 [2024-10-01 06:20:48.070613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.465 [2024-10-01 06:20:48.070623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:22.466 [2024-10-01 06:20:48.070636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:22.466 [2024-10-01 06:20:48.070646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.466 [2024-10-01 06:20:48.070654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:22.466 [2024-10-01 06:20:48.070664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:22.466 [2024-10-01 06:20:48.070671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.466 [2024-10-01 06:20:48.070681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:22.466 [2024-10-01 06:20:48.070688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:22.466 [2024-10-01 06:20:48.070699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.466 [2024-10-01 06:20:48.070707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:28:22.466 [2024-10-01 06:20:48.070719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:22.466 [2024-10-01 06:20:48.070726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.466 [2024-10-01 06:20:48.070736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:22.466 [2024-10-01 06:20:48.070743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:22.466 [2024-10-01 06:20:48.070753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:22.466 [2024-10-01 06:20:48.070760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:22.466 [2024-10-01 06:20:48.070770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:22.466 [2024-10-01 06:20:48.070778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:22.466 [2024-10-01 06:20:48.070787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:22.466 [2024-10-01 06:20:48.070795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:22.466 [2024-10-01 06:20:48.070805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.466 [2024-10-01 06:20:48.070813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:22.466 [2024-10-01 06:20:48.070822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:22.466 [2024-10-01 06:20:48.070830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.466 [2024-10-01 06:20:48.070839] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:22.466 [2024-10-01 06:20:48.070865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:22.466 [2024-10-01 06:20:48.070877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:22.466 [2024-10-01 06:20:48.070886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.466 [2024-10-01 06:20:48.070897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:22.466 [2024-10-01 06:20:48.070905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:22.466 [2024-10-01 06:20:48.070915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:22.466 [2024-10-01 06:20:48.070923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:22.466 [2024-10-01 06:20:48.070934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:22.466 [2024-10-01 06:20:48.070946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:22.466 [2024-10-01 06:20:48.070960] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:22.466 [2024-10-01 06:20:48.070971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.466 [2024-10-01 06:20:48.070983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:22.466 [2024-10-01 06:20:48.070991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:22.466 [2024-10-01 06:20:48.071002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:28:22.466 [2024-10-01 06:20:48.071010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:22.466 [2024-10-01 06:20:48.071020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:22.466 [2024-10-01 06:20:48.071029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:22.466 [2024-10-01 06:20:48.071040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:22.466 [2024-10-01 06:20:48.071048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:22.466 [2024-10-01 06:20:48.071059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:22.466 [2024-10-01 06:20:48.071067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:22.466 [2024-10-01 06:20:48.071078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:22.466 [2024-10-01 06:20:48.071086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:22.466 [2024-10-01 06:20:48.071096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:22.466 [2024-10-01 06:20:48.071104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:22.466 [2024-10-01 06:20:48.071114] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:22.466 [2024-10-01 06:20:48.071126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.466 [2024-10-01 06:20:48.071137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:22.466 [2024-10-01 06:20:48.071145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:22.466 [2024-10-01 06:20:48.071155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:22.466 [2024-10-01 06:20:48.071165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:22.466 [2024-10-01 06:20:48.071176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.466 [2024-10-01 06:20:48.071185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:22.466 [2024-10-01 06:20:48.071197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:28:22.466 [2024-10-01 06:20:48.071205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.466 [2024-10-01 06:20:48.071251] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:28:22.466 [2024-10-01 06:20:48.071264] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:25.773 [2024-10-01 06:20:50.941790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.942043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:25.773 [2024-10-01 06:20:50.942074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2870.517 ms 00:28:25.773 [2024-10-01 06:20:50.942084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:50.952991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.953148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:25.773 [2024-10-01 06:20:50.953171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.815 ms 00:28:25.773 [2024-10-01 06:20:50.953187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:50.953289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.953300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:25.773 [2024-10-01 06:20:50.953315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:25.773 [2024-10-01 06:20:50.953324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:50.963540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.963579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:25.773 [2024-10-01 06:20:50.963592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.162 ms 00:28:25.773 [2024-10-01 06:20:50.963601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:50.963635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.963643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:25.773 [2024-10-01 06:20:50.963656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:25.773 [2024-10-01 06:20:50.963663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:50.964124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.964140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:25.773 [2024-10-01 06:20:50.964151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:28:25.773 [2024-10-01 06:20:50.964159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:50.964288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.964298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:25.773 [2024-10-01 06:20:50.964309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:25.773 [2024-10-01 06:20:50.964320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:50.984639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.984730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:28:25.773 [2024-10-01 06:20:50.984768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.285 ms 00:28:25.773 [2024-10-01 06:20:50.984790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:50.994245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:25.773 [2024-10-01 06:20:50.997589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:50.997623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:25.773 [2024-10-01 06:20:50.997641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.441 ms 00:28:25.773 [2024-10-01 06:20:50.997652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.060937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.061017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:25.773 [2024-10-01 06:20:51.061032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.253 ms 00:28:25.773 [2024-10-01 06:20:51.061046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.061252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.061265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:25.773 [2024-10-01 06:20:51.061274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:28:25.773 [2024-10-01 06:20:51.061284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.065900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.065947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:25.773 [2024-10-01 06:20:51.065958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.596 ms 00:28:25.773 [2024-10-01 06:20:51.065968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.069511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.069549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:25.773 [2024-10-01 06:20:51.069560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.505 ms 00:28:25.773 [2024-10-01 06:20:51.069570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.069915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.069929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:25.773 [2024-10-01 06:20:51.069939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:28:25.773 [2024-10-01 06:20:51.069952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.097206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.097257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:25.773 [2024-10-01 06:20:51.097270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.234 ms 00:28:25.773 [2024-10-01 06:20:51.097280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 
[2024-10-01 06:20:51.102388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.102428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:25.773 [2024-10-01 06:20:51.102438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.058 ms 00:28:25.773 [2024-10-01 06:20:51.102448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.106359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.106393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:25.773 [2024-10-01 06:20:51.106402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.877 ms 00:28:25.773 [2024-10-01 06:20:51.106412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.111202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.111239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:25.773 [2024-10-01 06:20:51.111250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.758 ms 00:28:25.773 [2024-10-01 06:20:51.111263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.111302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.111314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:25.773 [2024-10-01 06:20:51.111323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:25.773 [2024-10-01 06:20:51.111334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.773 [2024-10-01 06:20:51.111405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.773 [2024-10-01 06:20:51.111417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:25.773 [2024-10-01 06:20:51.111427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:25.774 [2024-10-01 06:20:51.111437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.774 [2024-10-01 06:20:51.112425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3054.315 ms, result 0 00:28:25.774 { 00:28:25.774 "name": "ftl0", 00:28:25.774 "uuid": "d290051c-b53f-492a-9b45-c0ff8fd3e166" 00:28:25.774 } 00:28:25.774 06:20:51 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:25.774 06:20:51 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:25.774 06:20:51 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:25.774 06:20:51 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:26.132 [2024-10-01 06:20:51.552627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.132 [2024-10-01 06:20:51.552680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:26.132 [2024-10-01 06:20:51.552697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:26.132 [2024-10-01 06:20:51.552706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.132 [2024-10-01 06:20:51.552733] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:28:26.132 [2024-10-01 06:20:51.553348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.132 [2024-10-01 06:20:51.553382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:26.132 [2024-10-01 06:20:51.553392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:28:26.132 [2024-10-01 06:20:51.553401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.132 [2024-10-01 06:20:51.553669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.132 [2024-10-01 06:20:51.553682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:26.132 [2024-10-01 06:20:51.553692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:28:26.132 [2024-10-01 06:20:51.553702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.132 [2024-10-01 06:20:51.556938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.132 [2024-10-01 06:20:51.556963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:26.132 [2024-10-01 06:20:51.556973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:28:26.132 [2024-10-01 06:20:51.556988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.132 [2024-10-01 06:20:51.563155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.132 [2024-10-01 06:20:51.563284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:26.132 [2024-10-01 06:20:51.563302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:28:26.132 [2024-10-01 06:20:51.563312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.133 [2024-10-01 06:20:51.566046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.133 [2024-10-01 06:20:51.566086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:26.133 [2024-10-01 06:20:51.566096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:28:26.133 [2024-10-01 06:20:51.566107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.133 [2024-10-01 06:20:51.571916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.133 [2024-10-01 06:20:51.571951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:26.133 [2024-10-01 06:20:51.571962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.775 ms 00:28:26.133 [2024-10-01 06:20:51.571972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.133 [2024-10-01 06:20:51.572096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.133 [2024-10-01 06:20:51.572111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:26.133 [2024-10-01 06:20:51.572120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:26.133 [2024-10-01 06:20:51.572130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.133 [2024-10-01 06:20:51.574569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.133 [2024-10-01 06:20:51.574601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:26.133 [2024-10-01 06:20:51.574611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.420 ms 00:28:26.133 
[2024-10-01 06:20:51.574620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.133 [2024-10-01 06:20:51.576977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.133 [2024-10-01 06:20:51.577098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:26.133 [2024-10-01 06:20:51.577112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.326 ms 00:28:26.133 [2024-10-01 06:20:51.577121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.133 [2024-10-01 06:20:51.578957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.133 [2024-10-01 06:20:51.578989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:26.133 [2024-10-01 06:20:51.578998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.806 ms 00:28:26.133 [2024-10-01 06:20:51.579007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.133 [2024-10-01 06:20:51.580823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.133 [2024-10-01 06:20:51.580936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:26.133 [2024-10-01 06:20:51.580957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:28:26.133 [2024-10-01 06:20:51.580966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.133 [2024-10-01 06:20:51.580994] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:26.133 [2024-10-01 06:20:51.581011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:28:26.133 [2024-10-01 06:20:51.581148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:26.133 [2024-10-01 06:20:51.581561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581786] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:26.134 [2024-10-01 06:20:51.581909] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:26.134 [2024-10-01 06:20:51.581918] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d290051c-b53f-492a-9b45-c0ff8fd3e166 00:28:26.134 [2024-10-01 06:20:51.581928] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:26.134 [2024-10-01 06:20:51.581935] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:26.134 [2024-10-01 06:20:51.581944] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:26.134 [2024-10-01 06:20:51.581952] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:26.134 [2024-10-01 06:20:51.581965] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:26.134 [2024-10-01 06:20:51.581976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:26.134 [2024-10-01 06:20:51.581988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:26.134 [2024-10-01 06:20:51.581994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:26.134 [2024-10-01 06:20:51.582002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:26.134 [2024-10-01 06:20:51.582010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.134 [2024-10-01 06:20:51.582019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:26.134 [2024-10-01 06:20:51.582029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:28:26.134 [2024-10-01 06:20:51.582038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.583867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.134 [2024-10-01 06:20:51.583892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
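A quick cross-check of the shutdown dump above, as a minimal Python sketch. The 4 KiB block size is an assumption (the log never states it outright, though it is consistent with the superblock region table printed at the next startup, where blk_sz:0x80 pairs with 0.50 MiB), and the WAF formula is the usual NAND-writes-per-user-write ratio, which degenerates to infinity when user writes are zero, exactly as reported:

    BLOCK_SIZE = 4096          # assumed FTL block size (0x80 blocks == 0.50 MiB later in the log)
    BANDS = 100                # "Band 1" .. "Band 100" in the dump above
    BLOCKS_PER_BAND = 261120   # the "0 / 261120" capacity printed per band

    usable = BANDS * BLOCKS_PER_BAND * BLOCK_SIZE
    print(f"band capacity: {usable / 2**20:.0f} MiB")   # -> 102000 MiB

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification: NAND writes per user write; with zero user
        # writes it is infinite, matching "WAF: inf" above.
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))   # -> inf ("total writes: 960", "user writes: 0")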
Deinitialize L2P 00:28:26.134 [2024-10-01 06:20:51.583901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:28:26.134 [2024-10-01 06:20:51.583914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.584010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.134 [2024-10-01 06:20:51.584023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:26.134 [2024-10-01 06:20:51.584032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:26.134 [2024-10-01 06:20:51.584054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.590561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.590603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:26.134 [2024-10-01 06:20:51.590621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.590632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.590695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.590706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:26.134 [2024-10-01 06:20:51.590714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.590724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.590803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.590818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:26.134 [2024-10-01 06:20:51.590825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.590835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.590869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.590882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:26.134 [2024-10-01 06:20:51.590904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.590918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.602402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.602575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:26.134 [2024-10-01 06:20:51.602591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.602601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.612281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.612432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:26.134 [2024-10-01 06:20:51.612448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.612462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.612550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 
06:20:51.612565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:26.134 [2024-10-01 06:20:51.612574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.612584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.612622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.612633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:26.134 [2024-10-01 06:20:51.612641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.612653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.612731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.612743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:26.134 [2024-10-01 06:20:51.612750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.612760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.612805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.612817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:26.134 [2024-10-01 06:20:51.612825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.612837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.134 [2024-10-01 06:20:51.613066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.134 [2024-10-01 06:20:51.613115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:26.134 [2024-10-01 06:20:51.613136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.134 [2024-10-01 06:20:51.613157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.135 [2024-10-01 06:20:51.613218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:26.135 [2024-10-01 06:20:51.613244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:26.135 [2024-10-01 06:20:51.613264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:26.135 [2024-10-01 06:20:51.613288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.135 [2024-10-01 06:20:51.613439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.777 ms, result 0 00:28:26.135 true 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92926 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92926 ']' 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92926 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92926 00:28:26.135 killing process with pid 92926 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92926' 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 92926 00:28:26.135 06:20:51 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 92926 00:28:41.041 06:21:04 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:43.581 262144+0 records in 00:28:43.581 262144+0 records out 00:28:43.581 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.40637 s, 244 MB/s 00:28:43.581 06:21:08 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:46.128 06:21:11 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:46.128 [2024-10-01 06:21:11.394435] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:28:46.128 [2024-10-01 06:21:11.395055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93142 ] 00:28:46.128 [2024-10-01 06:21:11.537520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.128 [2024-10-01 06:21:11.619562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.391 [2024-10-01 06:21:11.780833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:46.391 [2024-10-01 06:21:11.781003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:46.391 [2024-10-01 06:21:11.947581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.948059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:46.391 [2024-10-01 06:21:11.948098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:46.391 [2024-10-01 06:21:11.948110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.948232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.948246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:46.391 [2024-10-01 06:21:11.948256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:46.391 [2024-10-01 06:21:11.948275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.948309] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:46.391 [2024-10-01 06:21:11.948720] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:46.391 [2024-10-01 06:21:11.948742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.948758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:46.391 [2024-10-01 06:21:11.948771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:28:46.391 [2024-10-01 06:21:11.948781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
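Two things worth unpacking in the trace above. killprocess() first probes the PID with `kill -0`, which delivers no signal and only reports whether the process exists and can be signaled; the restore.sh dd line then builds the 1 GiB test file, and its reported rate checks out arithmetically (dd counts decimal megabytes). A small Python sketch of both, where process_alive() is an illustrative stand-in, not an SPDK or autotest function:

    import os

    def process_alive(pid: int) -> bool:
        # Equivalent of the `kill -0 $pid` probe used by killprocess above:
        # signal 0 performs only the existence/permission check.
        try:
            os.kill(pid, 0)
        except ProcessLookupError:
            return False
        except PermissionError:
            return True   # process exists but belongs to another user
        return True

    # dd if=/dev/urandom of=.../testfile bs=4K count=256K
    records, bs = 256 * 1024, 4 * 1024
    nbytes = records * bs
    assert records == 262144 and nbytes == 1073741824  # "262144+0 records", "1073741824 bytes"
    print(f"{nbytes / 4.40637 / 1e6:.0f} MB/s")        # -> 244, as dd reports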
0 00:28:46.391 [2024-10-01 06:21:11.951328] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:46.391 [2024-10-01 06:21:11.957581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.957661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:46.391 [2024-10-01 06:21:11.957678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.253 ms 00:28:46.391 [2024-10-01 06:21:11.957689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.957822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.957838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:46.391 [2024-10-01 06:21:11.957885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:46.391 [2024-10-01 06:21:11.957902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.970515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.970607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:46.391 [2024-10-01 06:21:11.970625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.505 ms 00:28:46.391 [2024-10-01 06:21:11.970635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.970803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.970815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:46.391 [2024-10-01 06:21:11.970825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:28:46.391 [2024-10-01 06:21:11.970835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.971020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.971035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:46.391 [2024-10-01 06:21:11.971045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:46.391 [2024-10-01 06:21:11.971054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.971088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:46.391 [2024-10-01 06:21:11.974021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.974077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:46.391 [2024-10-01 06:21:11.974091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:28:46.391 [2024-10-01 06:21:11.974100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.974166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.974177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:46.391 [2024-10-01 06:21:11.974196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:46.391 [2024-10-01 06:21:11.974205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.974239] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 
0 00:28:46.391 [2024-10-01 06:21:11.974276] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:46.391 [2024-10-01 06:21:11.974333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:46.391 [2024-10-01 06:21:11.974353] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:46.391 [2024-10-01 06:21:11.974471] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:46.391 [2024-10-01 06:21:11.974483] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:46.391 [2024-10-01 06:21:11.974496] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:46.391 [2024-10-01 06:21:11.974514] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:46.391 [2024-10-01 06:21:11.974527] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:46.391 [2024-10-01 06:21:11.974538] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:46.391 [2024-10-01 06:21:11.974547] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:46.391 [2024-10-01 06:21:11.974559] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:46.391 [2024-10-01 06:21:11.974574] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:46.391 [2024-10-01 06:21:11.974583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.974592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:46.391 [2024-10-01 06:21:11.974600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:28:46.391 [2024-10-01 06:21:11.974608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.974701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.391 [2024-10-01 06:21:11.974712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:46.391 [2024-10-01 06:21:11.974724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:46.391 [2024-10-01 06:21:11.974733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.391 [2024-10-01 06:21:11.974889] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:46.391 [2024-10-01 06:21:11.974904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:46.391 [2024-10-01 06:21:11.974920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:46.391 [2024-10-01 06:21:11.974931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.391 [2024-10-01 06:21:11.974941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:46.391 [2024-10-01 06:21:11.974950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:46.391 [2024-10-01 06:21:11.974960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:46.391 [2024-10-01 06:21:11.974971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:46.391 [2024-10-01 06:21:11.974981] ftl_layout.c: 131:dump_region: 
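The layout figures above are internally consistent: the 80.00 MiB l2p region is exactly the entry count times the 4-byte address size, and at one entry per 4 KiB logical block that table addresses 80 GiB of user space (that the rest of the 101 GiB base device goes to metadata and over-provisioning is an inference, not something the log states). A two-line check:

    entries, addr_size = 20_971_520, 4   # "L2P entries" / "L2P address size" above
    print(entries * addr_size / 2**20)   # -> 80.0 MiB, the l2p region size
    print(entries * 4096 / 2**30)        # -> 80.0 GiB addressable at 4 KiB per block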
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:46.391 [2024-10-01 06:21:11.974989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:46.391 [2024-10-01 06:21:11.974999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:46.391 [2024-10-01 06:21:11.975019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:46.391 [2024-10-01 06:21:11.975027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:46.391 [2024-10-01 06:21:11.975036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:46.391 [2024-10-01 06:21:11.975045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:46.391 [2024-10-01 06:21:11.975054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.391 [2024-10-01 06:21:11.975061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:46.391 [2024-10-01 06:21:11.975068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:46.391 [2024-10-01 06:21:11.975075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.391 [2024-10-01 06:21:11.975082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:46.392 [2024-10-01 06:21:11.975091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:46.392 [2024-10-01 06:21:11.975098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.392 [2024-10-01 06:21:11.975105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:46.392 [2024-10-01 06:21:11.975112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:46.392 [2024-10-01 06:21:11.975124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.392 [2024-10-01 06:21:11.975133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:46.392 [2024-10-01 06:21:11.975141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:46.392 [2024-10-01 06:21:11.975157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.392 [2024-10-01 06:21:11.975165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:46.392 [2024-10-01 06:21:11.975173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:46.392 [2024-10-01 06:21:11.975181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.392 [2024-10-01 06:21:11.975188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:46.392 [2024-10-01 06:21:11.975196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:46.392 [2024-10-01 06:21:11.975203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:46.392 [2024-10-01 06:21:11.975210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:46.392 [2024-10-01 06:21:11.975218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:46.392 [2024-10-01 06:21:11.975225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:46.392 [2024-10-01 06:21:11.975232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:46.392 [2024-10-01 06:21:11.975239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:46.392 [2024-10-01 06:21:11.975246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.392 [2024-10-01 
06:21:11.975253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:46.392 [2024-10-01 06:21:11.975260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:46.392 [2024-10-01 06:21:11.975267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.392 [2024-10-01 06:21:11.975277] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:46.392 [2024-10-01 06:21:11.975289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:46.392 [2024-10-01 06:21:11.975298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:46.392 [2024-10-01 06:21:11.975308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.392 [2024-10-01 06:21:11.975321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:46.392 [2024-10-01 06:21:11.975329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:46.392 [2024-10-01 06:21:11.975336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:46.392 [2024-10-01 06:21:11.975344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:46.392 [2024-10-01 06:21:11.975352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:46.392 [2024-10-01 06:21:11.975360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:46.392 [2024-10-01 06:21:11.975371] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:46.392 [2024-10-01 06:21:11.975384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:46.392 [2024-10-01 06:21:11.975394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:46.392 [2024-10-01 06:21:11.975405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:46.392 [2024-10-01 06:21:11.975414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:46.392 [2024-10-01 06:21:11.975422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:46.392 [2024-10-01 06:21:11.975432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:46.392 [2024-10-01 06:21:11.975440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:46.392 [2024-10-01 06:21:11.975447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:46.392 [2024-10-01 06:21:11.975455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:46.392 [2024-10-01 06:21:11.975463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:46.392 [2024-10-01 06:21:11.975480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:46.392 [2024-10-01 06:21:11.975489] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:46.392 [2024-10-01 06:21:11.975497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:46.392 [2024-10-01 06:21:11.975505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:46.392 [2024-10-01 06:21:11.975513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:46.392 [2024-10-01 06:21:11.975521] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:46.392 [2024-10-01 06:21:11.975534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:46.392 [2024-10-01 06:21:11.975543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:46.392 [2024-10-01 06:21:11.975551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:46.392 [2024-10-01 06:21:11.975559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:46.392 [2024-10-01 06:21:11.975567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:46.392 [2024-10-01 06:21:11.975579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.392 [2024-10-01 06:21:11.975588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:46.392 [2024-10-01 06:21:11.975596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:28:46.392 [2024-10-01 06:21:11.975605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.008715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.009209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:46.655 [2024-10-01 06:21:12.009244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.043 ms 00:28:46.655 [2024-10-01 06:21:12.009257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.009435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.009449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:46.655 [2024-10-01 06:21:12.009462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:28:46.655 [2024-10-01 06:21:12.009473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.026743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.026839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:46.655 [2024-10-01 06:21:12.026887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.124 ms 00:28:46.655 [2024-10-01 06:21:12.026900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.026999] 
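The superblock dump repeats the same layout in raw block units, so each hex pair converts straight into the MiB figures of the region dump above; for example type 0x3 (band_md) at blk_offs:0x5020, blk_sz:0x80 lands on 80.12 MiB / 0.50 MiB. A sketch, assuming the 4 KiB block size implied by those paired figures:

    BLOCK = 4096   # block size implied by the paired block/MiB figures above

    def region_mib(blk_offs: int, blk_sz: int) -> tuple[float, float]:
        # Map a superblock (offset, size) in blocks to the MiB values
        # printed by the human-readable region dump.
        return blk_offs * BLOCK / 2**20, blk_sz * BLOCK / 2**20

    print(region_mib(0x5020, 0x80))   # -> (80.125, 0.5): band_md at 80.12 MiB, 0.50 MiB
    print(region_mib(0x20, 0x5000))   # -> (0.125, 80.0): the l2p region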
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.027012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:46.655 [2024-10-01 06:21:12.027024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:46.655 [2024-10-01 06:21:12.027034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.027893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.028126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:46.655 [2024-10-01 06:21:12.028147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:28:46.655 [2024-10-01 06:21:12.028158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.028373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.028386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:46.655 [2024-10-01 06:21:12.028397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:28:46.655 [2024-10-01 06:21:12.028407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.038710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.038789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:46.655 [2024-10-01 06:21:12.038815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.269 ms 00:28:46.655 [2024-10-01 06:21:12.038827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.044700] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:46.655 [2024-10-01 06:21:12.044781] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:46.655 [2024-10-01 06:21:12.044799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.044810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:46.655 [2024-10-01 06:21:12.044823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.731 ms 00:28:46.655 [2024-10-01 06:21:12.044833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.062381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.062784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:46.655 [2024-10-01 06:21:12.062831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.442 ms 00:28:46.655 [2024-10-01 06:21:12.062842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.068340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.068426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:46.655 [2024-10-01 06:21:12.068442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.366 ms 00:28:46.655 [2024-10-01 06:21:12.068451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.071898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:46.655 [2024-10-01 06:21:12.071970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:46.655 [2024-10-01 06:21:12.071986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:28:46.655 [2024-10-01 06:21:12.071995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.072483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.072514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:46.655 [2024-10-01 06:21:12.072528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:28:46.655 [2024-10-01 06:21:12.072538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.108031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.108172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:46.655 [2024-10-01 06:21:12.108201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.464 ms 00:28:46.655 [2024-10-01 06:21:12.108216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.119108] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:46.655 [2024-10-01 06:21:12.125022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.125116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:46.655 [2024-10-01 06:21:12.125135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.705 ms 00:28:46.655 [2024-10-01 06:21:12.125170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.125359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.125373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:46.655 [2024-10-01 06:21:12.125384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:46.655 [2024-10-01 06:21:12.125394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.655 [2024-10-01 06:21:12.125502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.655 [2024-10-01 06:21:12.125515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:46.655 [2024-10-01 06:21:12.125524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:46.656 [2024-10-01 06:21:12.125533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.656 [2024-10-01 06:21:12.125574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.656 [2024-10-01 06:21:12.125596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:46.656 [2024-10-01 06:21:12.125606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:46.656 [2024-10-01 06:21:12.125615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.656 [2024-10-01 06:21:12.125669] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:46.656 [2024-10-01 06:21:12.125682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.656 [2024-10-01 06:21:12.125691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 
00:28:46.656 [2024-10-01 06:21:12.125700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:46.656 [2024-10-01 06:21:12.125710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.656 [2024-10-01 06:21:12.134178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.656 [2024-10-01 06:21:12.134278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:46.656 [2024-10-01 06:21:12.134296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.440 ms 00:28:46.656 [2024-10-01 06:21:12.134307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.656 [2024-10-01 06:21:12.134429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.656 [2024-10-01 06:21:12.134441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:46.656 [2024-10-01 06:21:12.134455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:46.656 [2024-10-01 06:21:12.134465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.656 [2024-10-01 06:21:12.136153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 187.914 ms, result 0 00:30:26.242  Copying: 9772/1048576 [kB] (9772 kBps) Copying: 18388/1048576 [kB] (8616 kBps) Copying: 27/1024 [MB] (10 MBps) Copying: 38344/1048576 [kB] (9684 kBps) Copying: 47212/1048576 [kB] (8868 kBps) Copying: 56384/1048576 [kB] (9172 kBps) Copying: 65184/1048576 [kB] (8800 kBps) Copying: 73660/1048576 [kB] (8476 kBps) Copying: 82424/1048576 [kB] (8764 kBps) Copying: 91/1024 [MB] (10 MBps) Copying: 102512/1048576 [kB] (9032 kBps) Copying: 111616/1048576 [kB] (9104 kBps) Copying: 120352/1048576 [kB] (8736 kBps) Copying: 129276/1048576 [kB] (8924 kBps) Copying: 137976/1048576 [kB] (8700 kBps) Copying: 148008/1048576 [kB] (10032 kBps) Copying: 156868/1048576 [kB] (8860 kBps) Copying: 165460/1048576 [kB] (8592 kBps) Copying: 174008/1048576 [kB] (8548 kBps) Copying: 182872/1048576 [kB] (8864 kBps) Copying: 188/1024 [MB] (10 MBps) Copying: 201824/1048576 [kB] (8468 kBps) Copying: 210092/1048576 [kB] (8268 kBps) Copying: 218808/1048576 [kB] (8716 kBps) Copying: 227896/1048576 [kB] (9088 kBps) Copying: 237048/1048576 [kB] (9152 kBps) Copying: 245872/1048576 [kB] (8824 kBps) Copying: 254560/1048576 [kB] (8688 kBps) Copying: 263552/1048576 [kB] (8992 kBps) Copying: 267/1024 [MB] (10 MBps) Copying: 279/1024 [MB] (11 MBps) Copying: 295632/1048576 [kB] (9820 kBps) Copying: 305680/1048576 [kB] (10048 kBps) Copying: 315228/1048576 [kB] (9548 kBps) Copying: 324208/1048576 [kB] (8980 kBps) Copying: 327/1024 [MB] (10 MBps) Copying: 344904/1048576 [kB] (9916 kBps) Copying: 350/1024 [MB] (13 MBps) Copying: 364/1024 [MB] (14 MBps) Copying: 377/1024 [MB] (12 MBps) Copying: 388/1024 [MB] (11 MBps) Copying: 402/1024 [MB] (14 MBps) Copying: 413/1024 [MB] (11 MBps) Copying: 429/1024 [MB] (15 MBps) Copying: 441/1024 [MB] (11 MBps) Copying: 454/1024 [MB] (13 MBps) Copying: 471/1024 [MB] (16 MBps) Copying: 485/1024 [MB] (14 MBps) Copying: 495/1024 [MB] (10 MBps) Copying: 506/1024 [MB] (10 MBps) Copying: 518/1024 [MB] (12 MBps) Copying: 528/1024 [MB] (10 MBps) Copying: 541/1024 [MB] (12 MBps) Copying: 552/1024 [MB] (11 MBps) Copying: 565/1024 [MB] (13 MBps) Copying: 575/1024 [MB] (10 MBps) Copying: 595/1024 [MB] (19 MBps) Copying: 609/1024 [MB] (14 MBps) Copying: 621/1024 [MB] (11 MBps) Copying: 632/1024 [MB] (10 MBps) Copying: 
644/1024 [MB] (12 MBps) Copying: 657/1024 [MB] (13 MBps) Copying: 683520/1048576 [kB] (9820 kBps) Copying: 677/1024 [MB] (10 MBps) Copying: 687/1024 [MB] (10 MBps) Copying: 698/1024 [MB] (10 MBps) Copying: 708/1024 [MB] (10 MBps) Copying: 735488/1048576 [kB] (10232 kBps) Copying: 745000/1048576 [kB] (9512 kBps) Copying: 754232/1048576 [kB] (9232 kBps) Copying: 763032/1048576 [kB] (8800 kBps) Copying: 759/1024 [MB] (14 MBps) Copying: 786304/1048576 [kB] (8812 kBps) Copying: 778/1024 [MB] (10 MBps) Copying: 807176/1048576 [kB] (9864 kBps) Copying: 817248/1048576 [kB] (10072 kBps) Copying: 827368/1048576 [kB] (10120 kBps) Copying: 818/1024 [MB] (10 MBps) Copying: 847896/1048576 [kB] (10072 kBps) Copying: 838/1024 [MB] (10 MBps) Copying: 848/1024 [MB] (10 MBps) Copying: 858/1024 [MB] (10 MBps) Copying: 868/1024 [MB] (10 MBps) Copying: 879/1024 [MB] (10 MBps) Copying: 910448/1048576 [kB] (9896 kBps) Copying: 919892/1048576 [kB] (9444 kBps) Copying: 929944/1048576 [kB] (10052 kBps) Copying: 939456/1048576 [kB] (9512 kBps) Copying: 948768/1048576 [kB] (9312 kBps) Copying: 958792/1048576 [kB] (10024 kBps) Copying: 968200/1048576 [kB] (9408 kBps) Copying: 977924/1048576 [kB] (9724 kBps) Copying: 987876/1048576 [kB] (9952 kBps) Copying: 997264/1048576 [kB] (9388 kBps) Copying: 1006736/1048576 [kB] (9472 kBps) Copying: 993/1024 [MB] (10 MBps) Copying: 1026680/1048576 [kB] (8916 kBps) Copying: 1035248/1048576 [kB] (8568 kBps) Copying: 1044224/1048576 [kB] (8976 kBps) Copying: 1024/1024 [MB] (average 10 MBps)[2024-10-01 06:22:51.614797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.243 [2024-10-01 06:22:51.614922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:26.243 [2024-10-01 06:22:51.614942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:26.243 [2024-10-01 06:22:51.614952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.243 [2024-10-01 06:22:51.614983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:26.243 [2024-10-01 06:22:51.616010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.243 [2024-10-01 06:22:51.616048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:26.243 [2024-10-01 06:22:51.616059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:30:26.243 [2024-10-01 06:22:51.616068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.243 [2024-10-01 06:22:51.618372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.243 [2024-10-01 06:22:51.618425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:26.243 [2024-10-01 06:22:51.618437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:30:26.243 [2024-10-01 06:22:51.618445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.243 [2024-10-01 06:22:51.618494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.243 [2024-10-01 06:22:51.618510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:26.243 [2024-10-01 06:22:51.618520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:26.243 [2024-10-01 06:22:51.618533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.243 [2024-10-01 06:22:51.618604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
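The progress trail above ends at 1024/1024 MB with a reported 10 MBps average, which matches the wall clock: 'FTL startup' finished at 06:21:12.136 and the first shutdown action logs at 06:22:51.614, a window of roughly 99.5 s. A sketch of that arithmetic (the copy actually begins a moment after startup completes, so this slightly understates the true rate):

    from datetime import datetime

    start = datetime.fromisoformat("2024-10-01 06:21:12.136153")  # 'FTL startup' finished
    end   = datetime.fromisoformat("2024-10-01 06:22:51.614797")  # first shutdown action
    secs = (end - start).total_seconds()
    print(f"{1024 / secs:.1f} MBps over {secs:.1f} s")   # -> ~10.3 MBps over ~99.5 s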
[FTL][ftl0] Action 00:30:26.243 [2024-10-01 06:22:51.618615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:26.243 [2024-10-01 06:22:51.618627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:26.243 [2024-10-01 06:22:51.618636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.243 [2024-10-01 06:22:51.618651] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:26.243 [2024-10-01 06:22:51.618666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618836] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.618995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 
06:22:51.619051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:26.243 [2024-10-01 06:22:51.619191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:30:26.244 [2024-10-01 06:22:51.619245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:26.244 [2024-10-01 06:22:51.619491] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:26.244 [2024-10-01 06:22:51.619500] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d290051c-b53f-492a-9b45-c0ff8fd3e166 00:30:26.244 [2024-10-01 06:22:51.619514] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:26.244 [2024-10-01 06:22:51.619522] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:26.244 [2024-10-01 06:22:51.619530] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:26.244 [2024-10-01 06:22:51.619542] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:26.244 [2024-10-01 06:22:51.619550] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:26.244 [2024-10-01 06:22:51.619558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:26.244 [2024-10-01 06:22:51.619567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:26.244 [2024-10-01 06:22:51.619576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:26.244 [2024-10-01 06:22:51.619582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:26.244 [2024-10-01 06:22:51.619589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.244 [2024-10-01 06:22:51.619597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:26.244 [2024-10-01 06:22:51.619606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:30:26.244 [2024-10-01 06:22:51.619614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.622814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.244 [2024-10-01 06:22:51.622878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:26.244 [2024-10-01 06:22:51.622891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:30:26.244 [2024-10-01 06:22:51.622901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.623099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.244 [2024-10-01 06:22:51.623112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:26.244 [2024-10-01 06:22:51.623124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:30:26.244 [2024-10-01 06:22:51.623137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.632521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.632588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:26.244 [2024-10-01 06:22:51.632600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:30:26.244 [2024-10-01 06:22:51.632610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.632700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.632709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:26.244 [2024-10-01 06:22:51.632719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.244 [2024-10-01 06:22:51.632735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.632795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.632807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:26.244 [2024-10-01 06:22:51.632815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.244 [2024-10-01 06:22:51.632826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.632868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.632879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:26.244 [2024-10-01 06:22:51.632888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.244 [2024-10-01 06:22:51.632897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.653129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.653223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:26.244 [2024-10-01 06:22:51.653238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.244 [2024-10-01 06:22:51.653249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.668958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.669047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:26.244 [2024-10-01 06:22:51.669084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.244 [2024-10-01 06:22:51.669095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.669231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.669244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:26.244 [2024-10-01 06:22:51.669253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.244 [2024-10-01 06:22:51.669262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.669305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.669316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:26.244 [2024-10-01 06:22:51.669326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.244 [2024-10-01 06:22:51.669334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.669405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.244 [2024-10-01 06:22:51.669416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:26.244 [2024-10-01 
06:22:51.669425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.244 [2024-10-01 06:22:51.669434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.244 [2024-10-01 06:22:51.669462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.245 [2024-10-01 06:22:51.669473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:26.245 [2024-10-01 06:22:51.669481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.245 [2024-10-01 06:22:51.669489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.245 [2024-10-01 06:22:51.669540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.245 [2024-10-01 06:22:51.669554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:26.245 [2024-10-01 06:22:51.669563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.245 [2024-10-01 06:22:51.669571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.245 [2024-10-01 06:22:51.669629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.245 [2024-10-01 06:22:51.669641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:26.245 [2024-10-01 06:22:51.669651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.245 [2024-10-01 06:22:51.669660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.245 [2024-10-01 06:22:51.669820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.980 ms, result 0 00:30:27.187 00:30:27.187 00:30:27.187 06:22:52 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:27.187 [2024-10-01 06:22:52.705077] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
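The figures this run prints are mutually consistent; the following is a minimal, illustrative Python check (not part of the test suite — the 4 KiB FTL block size is inferred from the layout dump above, and the assumption that --count is in those blocks is confirmed only by the copy totals reported further down):

    # Infer the FTL block size from the layout dump: region "l2p" is
    # reported as blk_sz 0x5000 blocks and 80.00 MiB.
    l2p_blocks = 0x5000                       # blk_sz of the l2p region
    l2p_mib = 80.00                           # same region in MiB
    block_size = l2p_mib * 1024 * 1024 / l2p_blocks
    assert block_size == 4096                 # 4 KiB per FTL block

    # Cross-check: "L2P entries: 20971520" at "L2P address size: 4" bytes
    # is exactly the 80 MiB l2p region.
    assert 20971520 * 4 == l2p_blocks * int(block_size)

    # spdk_dd ran with --count=262144; at 4 KiB per unit that is the
    # 1024 MB the "Copying: 1024/1024 [MB]" progress later reports.
    total_mib = 262144 * block_size / (1024 * 1024)
    assert total_mib == 1024.0

    # The copy spans roughly 06:22:53 to 06:23:37 (~44 s), matching the
    # reported "average 23 MBps".
    print(round(total_mib / 44))              # -> 23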
00:30:27.187 [2024-10-01 06:22:52.705267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94145 ] 00:30:27.450 [2024-10-01 06:22:52.844935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.450 [2024-10-01 06:22:52.920781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.713 [2024-10-01 06:22:53.071666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:27.713 [2024-10-01 06:22:53.071785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:27.713 [2024-10-01 06:22:53.236878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.236973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:27.713 [2024-10-01 06:22:53.236994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:27.713 [2024-10-01 06:22:53.237004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.237091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.237104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:27.713 [2024-10-01 06:22:53.237115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:27.713 [2024-10-01 06:22:53.237131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.237154] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:27.713 [2024-10-01 06:22:53.237457] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:27.713 [2024-10-01 06:22:53.237476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.237485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:27.713 [2024-10-01 06:22:53.237505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:30:27.713 [2024-10-01 06:22:53.237514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.237898] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:27.713 [2024-10-01 06:22:53.237930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.237941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:27.713 [2024-10-01 06:22:53.237951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:27.713 [2024-10-01 06:22:53.237960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.238027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.238042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:27.713 [2024-10-01 06:22:53.238054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:27.713 [2024-10-01 06:22:53.238063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.238366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:27.713 [2024-10-01 06:22:53.238385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:27.713 [2024-10-01 06:22:53.238395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:30:27.713 [2024-10-01 06:22:53.238407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.238498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.238512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:27.713 [2024-10-01 06:22:53.238521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:27.713 [2024-10-01 06:22:53.238529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.238555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.238563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:27.713 [2024-10-01 06:22:53.238573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:27.713 [2024-10-01 06:22:53.238581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.238602] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:27.713 [2024-10-01 06:22:53.241467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.241514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:27.713 [2024-10-01 06:22:53.241530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:30:27.713 [2024-10-01 06:22:53.241540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.241581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.241590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:27.713 [2024-10-01 06:22:53.241601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:27.713 [2024-10-01 06:22:53.241609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.241663] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:27.713 [2024-10-01 06:22:53.241690] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:27.713 [2024-10-01 06:22:53.241743] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:27.713 [2024-10-01 06:22:53.241762] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:27.713 [2024-10-01 06:22:53.241890] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:27.713 [2024-10-01 06:22:53.241903] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:27.713 [2024-10-01 06:22:53.241920] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:27.713 [2024-10-01 06:22:53.241932] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:27.713 [2024-10-01 06:22:53.241947] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:27.713 [2024-10-01 06:22:53.241959] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:27.713 [2024-10-01 06:22:53.241970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:27.713 [2024-10-01 06:22:53.241978] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:27.713 [2024-10-01 06:22:53.241993] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:27.713 [2024-10-01 06:22:53.242001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.242009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:27.713 [2024-10-01 06:22:53.242025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:30:27.713 [2024-10-01 06:22:53.242032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.242115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.713 [2024-10-01 06:22:53.242124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:27.713 [2024-10-01 06:22:53.242132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:27.713 [2024-10-01 06:22:53.242143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.713 [2024-10-01 06:22:53.242252] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:27.714 [2024-10-01 06:22:53.242270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:27.714 [2024-10-01 06:22:53.242280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:27.714 [2024-10-01 06:22:53.242307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:27.714 [2024-10-01 06:22:53.242331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:27.714 [2024-10-01 06:22:53.242346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:27.714 [2024-10-01 06:22:53.242356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:27.714 [2024-10-01 06:22:53.242364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:27.714 [2024-10-01 06:22:53.242372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:27.714 [2024-10-01 06:22:53.242381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:27.714 [2024-10-01 06:22:53.242388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:27.714 [2024-10-01 06:22:53.242403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242410] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:27.714 [2024-10-01 06:22:53.242427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:27.714 [2024-10-01 06:22:53.242449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:27.714 [2024-10-01 06:22:53.242468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:27.714 [2024-10-01 06:22:53.242487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:27.714 [2024-10-01 06:22:53.242508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:27.714 [2024-10-01 06:22:53.242522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:27.714 [2024-10-01 06:22:53.242535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:27.714 [2024-10-01 06:22:53.242542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:27.714 [2024-10-01 06:22:53.242549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:27.714 [2024-10-01 06:22:53.242555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:27.714 [2024-10-01 06:22:53.242562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:27.714 [2024-10-01 06:22:53.242576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:27.714 [2024-10-01 06:22:53.242583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242592] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:27.714 [2024-10-01 06:22:53.242602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:27.714 [2024-10-01 06:22:53.242611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:27.714 [2024-10-01 06:22:53.242627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:27.714 [2024-10-01 06:22:53.242634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:27.714 [2024-10-01 06:22:53.242642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:27.714 
[2024-10-01 06:22:53.242649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:27.714 [2024-10-01 06:22:53.242659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:27.714 [2024-10-01 06:22:53.242666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:27.714 [2024-10-01 06:22:53.242675] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:27.714 [2024-10-01 06:22:53.242688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:27.714 [2024-10-01 06:22:53.242697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:27.714 [2024-10-01 06:22:53.242704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:27.714 [2024-10-01 06:22:53.242712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:27.714 [2024-10-01 06:22:53.242719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:27.714 [2024-10-01 06:22:53.242726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:27.714 [2024-10-01 06:22:53.242734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:27.714 [2024-10-01 06:22:53.242741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:27.714 [2024-10-01 06:22:53.242749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:27.714 [2024-10-01 06:22:53.242757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:27.714 [2024-10-01 06:22:53.242764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:27.714 [2024-10-01 06:22:53.242772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:27.714 [2024-10-01 06:22:53.242785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:27.714 [2024-10-01 06:22:53.242795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:27.714 [2024-10-01 06:22:53.242803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:27.714 [2024-10-01 06:22:53.242809] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:27.714 [2024-10-01 06:22:53.242819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:27.714 [2024-10-01 06:22:53.242829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:27.714 [2024-10-01 06:22:53.242836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:27.714 [2024-10-01 06:22:53.243290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:27.714 [2024-10-01 06:22:53.243359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:27.714 [2024-10-01 06:22:53.243400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.714 [2024-10-01 06:22:53.243423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:27.714 [2024-10-01 06:22:53.243446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:30:27.714 [2024-10-01 06:22:53.243529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.714 [2024-10-01 06:22:53.273556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.714 [2024-10-01 06:22:53.273988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:27.714 [2024-10-01 06:22:53.274193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.933 ms 00:30:27.714 [2024-10-01 06:22:53.274902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.714 [2024-10-01 06:22:53.275249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.714 [2024-10-01 06:22:53.275298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:27.714 [2024-10-01 06:22:53.275329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:30:27.714 [2024-10-01 06:22:53.275352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.714 [2024-10-01 06:22:53.292690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.714 [2024-10-01 06:22:53.292902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.714 [2024-10-01 06:22:53.292928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.153 ms 00:30:27.714 [2024-10-01 06:22:53.292939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.714 [2024-10-01 06:22:53.292993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.714 [2024-10-01 06:22:53.293003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:27.714 [2024-10-01 06:22:53.293014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:27.714 [2024-10-01 06:22:53.293022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.714 [2024-10-01 06:22:53.293166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.714 [2024-10-01 06:22:53.293179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:27.714 [2024-10-01 06:22:53.293189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:27.714 [2024-10-01 06:22:53.293206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.714 [2024-10-01 06:22:53.293351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.714 [2024-10-01 06:22:53.293362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:27.714 [2024-10-01 06:22:53.293376] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:30:27.715 [2024-10-01 06:22:53.293389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.715 [2024-10-01 06:22:53.302717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.715 [2024-10-01 06:22:53.302764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:27.715 [2024-10-01 06:22:53.302776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.305 ms 00:30:27.715 [2024-10-01 06:22:53.302785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.715 [2024-10-01 06:22:53.302958] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:27.715 [2024-10-01 06:22:53.302973] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:27.715 [2024-10-01 06:22:53.302984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.715 [2024-10-01 06:22:53.302994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:27.715 [2024-10-01 06:22:53.303004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:27.715 [2024-10-01 06:22:53.303012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.715 [2024-10-01 06:22:53.315339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.715 [2024-10-01 06:22:53.315386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:27.715 [2024-10-01 06:22:53.315399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.301 ms 00:30:27.715 [2024-10-01 06:22:53.315409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.715 [2024-10-01 06:22:53.315576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.715 [2024-10-01 06:22:53.315588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:27.715 [2024-10-01 06:22:53.315598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:30:27.715 [2024-10-01 06:22:53.315606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.715 [2024-10-01 06:22:53.315655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.715 [2024-10-01 06:22:53.315670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:27.715 [2024-10-01 06:22:53.315679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:27.715 [2024-10-01 06:22:53.315691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.715 [2024-10-01 06:22:53.316097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.715 [2024-10-01 06:22:53.316119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:27.715 [2024-10-01 06:22:53.316129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:30:27.715 [2024-10-01 06:22:53.316140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.715 [2024-10-01 06:22:53.316158] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:27.715 [2024-10-01 06:22:53.316170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.715 [2024-10-01 06:22:53.316178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:27.715 [2024-10-01 06:22:53.316187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:27.715 [2024-10-01 06:22:53.316198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.715 [2024-10-01 06:22:53.327097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:27.715 [2024-10-01 06:22:53.327390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.715 [2024-10-01 06:22:53.327409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:27.715 [2024-10-01 06:22:53.327422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.173 ms 00:30:27.715 [2024-10-01 06:22:53.327432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.977 [2024-10-01 06:22:53.330050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.977 [2024-10-01 06:22:53.330089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:27.977 [2024-10-01 06:22:53.330102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:30:27.977 [2024-10-01 06:22:53.330111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.977 [2024-10-01 06:22:53.330231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.977 [2024-10-01 06:22:53.330242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:27.977 [2024-10-01 06:22:53.330252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:27.977 [2024-10-01 06:22:53.330261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.977 [2024-10-01 06:22:53.330340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.977 [2024-10-01 06:22:53.330351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:27.977 [2024-10-01 06:22:53.330360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:27.977 [2024-10-01 06:22:53.330372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.977 [2024-10-01 06:22:53.330415] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:27.977 [2024-10-01 06:22:53.330429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.977 [2024-10-01 06:22:53.330439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:27.977 [2024-10-01 06:22:53.330449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:27.977 [2024-10-01 06:22:53.330456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.977 [2024-10-01 06:22:53.337638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.977 [2024-10-01 06:22:53.337814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:27.977 [2024-10-01 06:22:53.337900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.160 ms 00:30:27.977 [2024-10-01 06:22:53.338402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.977 [2024-10-01 06:22:53.338985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.977 [2024-10-01 06:22:53.339039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:27.977 [2024-10-01 06:22:53.339059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.053 ms 00:30:27.977 [2024-10-01 06:22:53.339072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.977 [2024-10-01 06:22:53.340558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.137 ms, result 0 00:31:12.588  Copying: 9620/1048576 [kB] (9620 kBps) Copying: 19/1024 [MB] (10 MBps) Copying: 29268/1048576 [kB] (9232 kBps) Copying: 38720/1048576 [kB] (9452 kBps) Copying: 47728/1048576 [kB] (9008 kBps) Copying: 57164/1048576 [kB] (9436 kBps) Copying: 66088/1048576 [kB] (8924 kBps) Copying: 75800/1048576 [kB] (9712 kBps) Copying: 84924/1048576 [kB] (9124 kBps) Copying: 93892/1048576 [kB] (8968 kBps) Copying: 103160/1048576 [kB] (9268 kBps) Copying: 117/1024 [MB] (16 MBps) Copying: 148/1024 [MB] (30 MBps) Copying: 181/1024 [MB] (33 MBps) Copying: 220/1024 [MB] (38 MBps) Copying: 257/1024 [MB] (37 MBps) Copying: 296/1024 [MB] (38 MBps) Copying: 335/1024 [MB] (38 MBps) Copying: 370/1024 [MB] (35 MBps) Copying: 409/1024 [MB] (38 MBps) Copying: 450/1024 [MB] (41 MBps) Copying: 498/1024 [MB] (47 MBps) Copying: 545/1024 [MB] (46 MBps) Copying: 586/1024 [MB] (41 MBps) Copying: 618/1024 [MB] (31 MBps) Copying: 665/1024 [MB] (47 MBps) Copying: 711/1024 [MB] (46 MBps) Copying: 760/1024 [MB] (48 MBps) Copying: 794/1024 [MB] (34 MBps) Copying: 812/1024 [MB] (17 MBps) Copying: 829/1024 [MB] (16 MBps) Copying: 842/1024 [MB] (13 MBps) Copying: 860/1024 [MB] (18 MBps) Copying: 879/1024 [MB] (18 MBps) Copying: 898/1024 [MB] (19 MBps) Copying: 912/1024 [MB] (13 MBps) Copying: 923/1024 [MB] (10 MBps) Copying: 934/1024 [MB] (11 MBps) Copying: 955/1024 [MB] (20 MBps) Copying: 973/1024 [MB] (18 MBps) Copying: 993/1024 [MB] (19 MBps) Copying: 1026300/1048576 [kB] (9036 kBps) Copying: 1035484/1048576 [kB] (9184 kBps) Copying: 1045528/1048576 [kB] (10044 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-10-01 06:23:37.964169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.588 [2024-10-01 06:23:37.964297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:12.588 [2024-10-01 06:23:37.964325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:12.588 [2024-10-01 06:23:37.964342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.588 [2024-10-01 06:23:37.964398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:12.588 [2024-10-01 06:23:37.965494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.588 [2024-10-01 06:23:37.965545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:12.588 [2024-10-01 06:23:37.965565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:31:12.588 [2024-10-01 06:23:37.965581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.588 [2024-10-01 06:23:37.966320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.588 [2024-10-01 06:23:37.966358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:12.588 [2024-10-01 06:23:37.966375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:31:12.588 [2024-10-01 06:23:37.966398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.588 [2024-10-01 06:23:37.966465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.588 [2024-10-01 06:23:37.966486] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:12.588 [2024-10-01 06:23:37.966505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:12.588 [2024-10-01 06:23:37.966519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.588 [2024-10-01 06:23:37.966618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.588 [2024-10-01 06:23:37.966635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:12.588 [2024-10-01 06:23:37.966651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:12.588 [2024-10-01 06:23:37.966667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.588 [2024-10-01 06:23:37.966693] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:12.588 [2024-10-01 06:23:37.966715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.966990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967366] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967718] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:12.589 [2024-10-01 06:23:37.967888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.967902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.967916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.967930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.967944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.967957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.967972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.967985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 
06:23:37.968103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:12.590 [2024-10-01 06:23:37.968217] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:12.590 [2024-10-01 06:23:37.968231] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d290051c-b53f-492a-9b45-c0ff8fd3e166 00:31:12.590 [2024-10-01 06:23:37.968250] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:12.590 [2024-10-01 06:23:37.968263] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:12.590 [2024-10-01 06:23:37.968277] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:12.590 [2024-10-01 06:23:37.968293] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:12.590 [2024-10-01 06:23:37.968307] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:12.590 [2024-10-01 06:23:37.968327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:12.590 [2024-10-01 06:23:37.968351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:12.590 [2024-10-01 06:23:37.968365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:12.590 [2024-10-01 06:23:37.968378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:12.590 [2024-10-01 06:23:37.968392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.590 [2024-10-01 06:23:37.968406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:12.590 [2024-10-01 06:23:37.968421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:31:12.590 [2024-10-01 06:23:37.968435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:37.972352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.590 [2024-10-01 06:23:37.972526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:12.590 [2024-10-01 06:23:37.972589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.889 ms 00:31:12.590 [2024-10-01 06:23:37.972615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:37.972796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.590 [2024-10-01 06:23:37.973031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:12.590 [2024-10-01 06:23:37.973076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:31:12.590 [2024-10-01 06:23:37.973098] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:37.982182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:37.982354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:12.590 [2024-10-01 06:23:37.982414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:37.982439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:37.982542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:37.982565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:12.590 [2024-10-01 06:23:37.982586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:37.982607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:37.982706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:37.982804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:12.590 [2024-10-01 06:23:37.982825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:37.982878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:37.982911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:37.982933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:12.590 [2024-10-01 06:23:37.982955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:37.983025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.002178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:38.002244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:12.590 [2024-10-01 06:23:38.002258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:38.002268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.017726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:38.017981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:12.590 [2024-10-01 06:23:38.018004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:38.018014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.018104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:38.018116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:12.590 [2024-10-01 06:23:38.018125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:38.018134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.018174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:38.018184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:12.590 [2024-10-01 06:23:38.018193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:38.018202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.018277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:38.018291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:12.590 [2024-10-01 06:23:38.018300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:38.018309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.018336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:38.018346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:12.590 [2024-10-01 06:23:38.018359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:38.018367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.018419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:38.018429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:12.590 [2024-10-01 06:23:38.018442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:38.018450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.018507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.590 [2024-10-01 06:23:38.018519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:12.590 [2024-10-01 06:23:38.018531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.590 [2024-10-01 06:23:38.018541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.590 [2024-10-01 06:23:38.018713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.506 ms, result 0 00:31:12.853 00:31:12.853 00:31:12.853 06:23:38 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:15.394 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:15.394 06:23:40 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:15.394 [2024-10-01 06:23:40.718516] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
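The sequence above is the heart of the restore test: after the 'FTL fast shutdown' completes, md5sum -c confirms that the data written in the first pass survived, and spdk_dd then reopens the same ftl0 bdev from the saved ftl.json and resumes writing at an output offset (--seek=131072). A minimal sketch of that verify-and-resume step, using only the paths and flags visible in the log (the SPDK_DIR variable is illustrative, not part of the test script):

    # Verify the data written before the fast shutdown against the checksums
    # recorded earlier in the test.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"
    # Resume writing the same file to the FTL bdev, skipping the output region
    # already written; ftl.json carries the bdev configuration saved before the
    # shutdown.
    "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile" --ob=ftl0 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json" --seek=131072

The WAF figures in the statistics dumps are consistent with WAF = total writes / user writes: the dump above reports total writes 32 with user writes 0, hence 'WAF: inf', while the post-copy dump later in the run reports 126752 / 126720 ≈ 1.0003.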
00:31:15.394 [2024-10-01 06:23:40.718677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94623 ] 00:31:15.394 [2024-10-01 06:23:40.856153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.394 [2024-10-01 06:23:40.931934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.684 [2024-10-01 06:23:41.083895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.684 [2024-10-01 06:23:41.084009] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.684 [2024-10-01 06:23:41.248218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.248313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:15.684 [2024-10-01 06:23:41.248338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:15.684 [2024-10-01 06:23:41.248352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.248433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.248444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:15.684 [2024-10-01 06:23:41.248455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:15.684 [2024-10-01 06:23:41.248471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.248500] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:15.684 [2024-10-01 06:23:41.248820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:15.684 [2024-10-01 06:23:41.248873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.248883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:15.684 [2024-10-01 06:23:41.248897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:31:15.684 [2024-10-01 06:23:41.248906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.249343] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:15.684 [2024-10-01 06:23:41.249380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.249391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:15.684 [2024-10-01 06:23:41.249402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:15.684 [2024-10-01 06:23:41.249410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.249474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.249489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:15.684 [2024-10-01 06:23:41.249502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:15.684 [2024-10-01 06:23:41.249511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.249780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:15.684 [2024-10-01 06:23:41.249793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:15.684 [2024-10-01 06:23:41.249803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:31:15.684 [2024-10-01 06:23:41.249811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.249919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.249934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:15.684 [2024-10-01 06:23:41.249944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:31:15.684 [2024-10-01 06:23:41.249957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.249984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.249993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:15.684 [2024-10-01 06:23:41.250004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:15.684 [2024-10-01 06:23:41.250012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.250042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:15.684 [2024-10-01 06:23:41.252790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.252865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:15.684 [2024-10-01 06:23:41.252882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:31:15.684 [2024-10-01 06:23:41.252892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.252934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.252945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:15.684 [2024-10-01 06:23:41.252955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:15.684 [2024-10-01 06:23:41.252965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.253020] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:15.684 [2024-10-01 06:23:41.253067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:15.684 [2024-10-01 06:23:41.253118] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:15.684 [2024-10-01 06:23:41.253140] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:15.684 [2024-10-01 06:23:41.253252] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:15.684 [2024-10-01 06:23:41.253265] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:15.684 [2024-10-01 06:23:41.253278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:15.684 [2024-10-01 06:23:41.253291] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:15.684 [2024-10-01 06:23:41.253301] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:15.684 [2024-10-01 06:23:41.253317] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:15.684 [2024-10-01 06:23:41.253335] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:15.684 [2024-10-01 06:23:41.253345] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:15.684 [2024-10-01 06:23:41.253353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:15.684 [2024-10-01 06:23:41.253362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.253376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:15.684 [2024-10-01 06:23:41.253386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:31:15.684 [2024-10-01 06:23:41.253395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.684 [2024-10-01 06:23:41.253484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.684 [2024-10-01 06:23:41.253494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:15.685 [2024-10-01 06:23:41.253503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:15.685 [2024-10-01 06:23:41.253514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.685 [2024-10-01 06:23:41.253620] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:15.685 [2024-10-01 06:23:41.253633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:15.685 [2024-10-01 06:23:41.253643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.685 [2024-10-01 06:23:41.253659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:15.685 [2024-10-01 06:23:41.253677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:15.685 [2024-10-01 06:23:41.253695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:15.685 [2024-10-01 06:23:41.253705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.685 [2024-10-01 06:23:41.253722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:15.685 [2024-10-01 06:23:41.253730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:15.685 [2024-10-01 06:23:41.253736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.685 [2024-10-01 06:23:41.253743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:15.685 [2024-10-01 06:23:41.253751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:15.685 [2024-10-01 06:23:41.253762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:15.685 [2024-10-01 06:23:41.253777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:15.685 [2024-10-01 06:23:41.253784] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:15.685 [2024-10-01 06:23:41.253802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.685 [2024-10-01 06:23:41.253817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:15.685 [2024-10-01 06:23:41.253825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.685 [2024-10-01 06:23:41.253839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:15.685 [2024-10-01 06:23:41.253861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.685 [2024-10-01 06:23:41.253876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:15.685 [2024-10-01 06:23:41.253884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.685 [2024-10-01 06:23:41.253899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:15.685 [2024-10-01 06:23:41.253906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.685 [2024-10-01 06:23:41.253920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:15.685 [2024-10-01 06:23:41.253932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:15.685 [2024-10-01 06:23:41.253939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.685 [2024-10-01 06:23:41.253946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:15.685 [2024-10-01 06:23:41.253953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:15.685 [2024-10-01 06:23:41.253961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.685 [2024-10-01 06:23:41.253988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:15.685 [2024-10-01 06:23:41.253997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:15.685 [2024-10-01 06:23:41.254005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.685 [2024-10-01 06:23:41.254012] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:15.685 [2024-10-01 06:23:41.254021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:15.685 [2024-10-01 06:23:41.254030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.685 [2024-10-01 06:23:41.254044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.685 [2024-10-01 06:23:41.254055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:15.685 [2024-10-01 06:23:41.254063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:15.685 [2024-10-01 06:23:41.254071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:15.685 
[2024-10-01 06:23:41.254078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:15.685 [2024-10-01 06:23:41.254089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:15.685 [2024-10-01 06:23:41.254096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:15.685 [2024-10-01 06:23:41.254105] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:15.685 [2024-10-01 06:23:41.254119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.685 [2024-10-01 06:23:41.254132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:15.685 [2024-10-01 06:23:41.254139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:15.685 [2024-10-01 06:23:41.254147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:15.685 [2024-10-01 06:23:41.254155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:15.685 [2024-10-01 06:23:41.254161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:15.685 [2024-10-01 06:23:41.254169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:15.685 [2024-10-01 06:23:41.254176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:15.685 [2024-10-01 06:23:41.254184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:15.685 [2024-10-01 06:23:41.254192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:15.685 [2024-10-01 06:23:41.254200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:15.685 [2024-10-01 06:23:41.254208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:15.685 [2024-10-01 06:23:41.254222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:15.685 [2024-10-01 06:23:41.254231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:15.685 [2024-10-01 06:23:41.254239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:15.685 [2024-10-01 06:23:41.254246] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:15.685 [2024-10-01 06:23:41.254255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.685 [2024-10-01 06:23:41.254263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:15.685 [2024-10-01 06:23:41.254271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:15.685 [2024-10-01 06:23:41.254278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:15.685 [2024-10-01 06:23:41.254286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:15.685 [2024-10-01 06:23:41.254293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.685 [2024-10-01 06:23:41.254301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:15.685 [2024-10-01 06:23:41.254309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:31:15.685 [2024-10-01 06:23:41.254316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.685 [2024-10-01 06:23:41.274259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.685 [2024-10-01 06:23:41.274329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:15.686 [2024-10-01 06:23:41.274357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.888 ms 00:31:15.686 [2024-10-01 06:23:41.274368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.686 [2024-10-01 06:23:41.274489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.686 [2024-10-01 06:23:41.274503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:15.686 [2024-10-01 06:23:41.274515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:15.686 [2024-10-01 06:23:41.274524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.686 [2024-10-01 06:23:41.290302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.686 [2024-10-01 06:23:41.290364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:15.686 [2024-10-01 06:23:41.290383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.694 ms 00:31:15.686 [2024-10-01 06:23:41.290398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.686 [2024-10-01 06:23:41.290447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.686 [2024-10-01 06:23:41.290458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:15.686 [2024-10-01 06:23:41.290472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:15.686 [2024-10-01 06:23:41.290481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.686 [2024-10-01 06:23:41.290595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.686 [2024-10-01 06:23:41.290611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:15.686 [2024-10-01 06:23:41.290624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:15.686 [2024-10-01 06:23:41.290635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.686 [2024-10-01 06:23:41.290782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.686 [2024-10-01 06:23:41.290793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:15.686 [2024-10-01 06:23:41.290803] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:31:15.686 [2024-10-01 06:23:41.290819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.949 [2024-10-01 06:23:41.300438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.949 [2024-10-01 06:23:41.300499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:15.949 [2024-10-01 06:23:41.300512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.594 ms 00:31:15.949 [2024-10-01 06:23:41.300521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.949 [2024-10-01 06:23:41.300695] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:15.949 [2024-10-01 06:23:41.300710] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:15.949 [2024-10-01 06:23:41.300724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.949 [2024-10-01 06:23:41.300735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:15.949 [2024-10-01 06:23:41.300748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:31:15.949 [2024-10-01 06:23:41.300756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.949 [2024-10-01 06:23:41.313115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.949 [2024-10-01 06:23:41.313168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:15.949 [2024-10-01 06:23:41.313180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.339 ms 00:31:15.949 [2024-10-01 06:23:41.313193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.949 [2024-10-01 06:23:41.313338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.949 [2024-10-01 06:23:41.313349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:15.949 [2024-10-01 06:23:41.313359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:31:15.949 [2024-10-01 06:23:41.313367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.949 [2024-10-01 06:23:41.313425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.949 [2024-10-01 06:23:41.313441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:15.950 [2024-10-01 06:23:41.313451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:15.950 [2024-10-01 06:23:41.313463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.313812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.313823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:15.950 [2024-10-01 06:23:41.313838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:31:15.950 [2024-10-01 06:23:41.313890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.313916] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:15.950 [2024-10-01 06:23:41.313927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.313937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:15.950 [2024-10-01 06:23:41.313946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:15.950 [2024-10-01 06:23:41.313957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.324599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:15.950 [2024-10-01 06:23:41.324956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.324976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:15.950 [2024-10-01 06:23:41.324988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.978 ms 00:31:15.950 [2024-10-01 06:23:41.324997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.327456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.327497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:15.950 [2024-10-01 06:23:41.327509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:31:15.950 [2024-10-01 06:23:41.327519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.327637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.327650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:15.950 [2024-10-01 06:23:41.327661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:15.950 [2024-10-01 06:23:41.327670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.327706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.327720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:15.950 [2024-10-01 06:23:41.327730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:15.950 [2024-10-01 06:23:41.327738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.327782] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:15.950 [2024-10-01 06:23:41.327795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.327805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:15.950 [2024-10-01 06:23:41.327818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:15.950 [2024-10-01 06:23:41.327826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.335807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.336015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:15.950 [2024-10-01 06:23:41.336091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.797 ms 00:31:15.950 [2024-10-01 06:23:41.336116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.336319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.950 [2024-10-01 06:23:41.336390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:15.950 [2024-10-01 06:23:41.336413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.053 ms 00:31:15.950 [2024-10-01 06:23:41.336438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.950 [2024-10-01 06:23:41.338109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 89.268 ms, result 0 00:33:00.696  Copying: 1024/1024 [MB] (average 10013 kBps)[2024-10-01 06:25:26.075512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.696 [2024-10-01 06:25:26.075584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:00.696 [2024-10-01 06:25:26.075600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:00.696 [2024-10-01 06:25:26.075608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.696 [2024-10-01 06:25:26.077068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:00.696 [2024-10-01 06:25:26.080020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.696 [2024-10-01 06:25:26.080057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:00.696 [2024-10-01 06:25:26.080070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:33:00.696 [2024-10-01 06:25:26.080086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.696 [2024-10-01 06:25:26.091301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.696 [2024-10-01 06:25:26.091361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:00.696 [2024-10-01 06:25:26.091379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.383 ms 00:33:00.696 [2024-10-01 06:25:26.091388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.696 [2024-10-01 06:25:26.091424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.696 [2024-10-01 06:25:26.091435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:00.696 [2024-10-01 06:25:26.091444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:00.696 [2024-10-01 06:25:26.091452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.696 [2024-10-01 06:25:26.091510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.696 [2024-10-01 06:25:26.091519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:00.696 [2024-10-01 06:25:26.091528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:00.696 [2024-10-01 06:25:26.091538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.696 [2024-10-01 06:25:26.091551] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:00.696 [2024-10-01 06:25:26.091562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126720 / 261120 wr_cnt: 1 state: open 00:33:00.696 [2024-10-01 06:25:26.091572] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 
06:25:26.091762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:00.696 [2024-10-01 06:25:26.091960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:33:00.697 [2024-10-01 06:25:26.091968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.091976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.091983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.091992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.091999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:00.697 [2024-10-01 06:25:26.092360] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:33:00.697 [2024-10-01 06:25:26.092367] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d290051c-b53f-492a-9b45-c0ff8fd3e166 00:33:00.697 [2024-10-01 06:25:26.092379] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126720 00:33:00.697 [2024-10-01 06:25:26.092386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126752 00:33:00.697 [2024-10-01 06:25:26.092394] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126720 00:33:00.697 [2024-10-01 06:25:26.092401] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:33:00.697 [2024-10-01 06:25:26.092408] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:00.697 [2024-10-01 06:25:26.092416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:00.697 [2024-10-01 06:25:26.092429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:00.697 [2024-10-01 06:25:26.092436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:00.697 [2024-10-01 06:25:26.092442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:00.697 [2024-10-01 06:25:26.092450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.697 [2024-10-01 06:25:26.092457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:00.697 [2024-10-01 06:25:26.092465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:33:00.697 [2024-10-01 06:25:26.092473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.094742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.697 [2024-10-01 06:25:26.094859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:00.697 [2024-10-01 06:25:26.094912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:33:00.697 [2024-10-01 06:25:26.094935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.095059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.697 [2024-10-01 06:25:26.095086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:00.697 [2024-10-01 06:25:26.095107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:33:00.697 [2024-10-01 06:25:26.095126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.100620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.697 [2024-10-01 06:25:26.100747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:00.697 [2024-10-01 06:25:26.100801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.697 [2024-10-01 06:25:26.100824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.100910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.697 [2024-10-01 06:25:26.100934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:00.697 [2024-10-01 06:25:26.100960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.697 [2024-10-01 06:25:26.100979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.101059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:33:00.697 [2024-10-01 06:25:26.101085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:00.697 [2024-10-01 06:25:26.101110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.697 [2024-10-01 06:25:26.101225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.101262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.697 [2024-10-01 06:25:26.101282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:00.697 [2024-10-01 06:25:26.101303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.697 [2024-10-01 06:25:26.101346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.113660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.697 [2024-10-01 06:25:26.113879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:00.697 [2024-10-01 06:25:26.113935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.697 [2024-10-01 06:25:26.113974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.124487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.697 [2024-10-01 06:25:26.124699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:00.697 [2024-10-01 06:25:26.124752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.697 [2024-10-01 06:25:26.124775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.697 [2024-10-01 06:25:26.124879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.698 [2024-10-01 06:25:26.124913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:00.698 [2024-10-01 06:25:26.124934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.698 [2024-10-01 06:25:26.124953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.698 [2024-10-01 06:25:26.125003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.698 [2024-10-01 06:25:26.125189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:00.698 [2024-10-01 06:25:26.125200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.698 [2024-10-01 06:25:26.125208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.698 [2024-10-01 06:25:26.125273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.698 [2024-10-01 06:25:26.125284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:00.698 [2024-10-01 06:25:26.125292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.698 [2024-10-01 06:25:26.125300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.698 [2024-10-01 06:25:26.125331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:00.698 [2024-10-01 06:25:26.125343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:00.698 [2024-10-01 06:25:26.125351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:00.698 [2024-10-01 06:25:26.125359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.698 [2024-10-01 
06:25:26.125404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:00.698 [2024-10-01 06:25:26.125413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:33:00.698 [2024-10-01 06:25:26.125421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:00.698 [2024-10-01 06:25:26.125429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.698 [2024-10-01 06:25:26.125496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:00.698 [2024-10-01 06:25:26.125507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:00.698 [2024-10-01 06:25:26.125515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:00.698 [2024-10-01 06:25:26.125523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:00.698 [2024-10-01 06:25:26.125659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.360 ms, result 0
00:33:01.272 
00:33:01.272 
00:33:01.272 06:25:26 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:33:01.534 [2024-10-01 06:25:26.949321] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization...
00:33:01.534 [2024-10-01 06:25:26.949463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95687 ]
00:33:01.534 [2024-10-01 06:25:27.085923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:01.534 [2024-10-01 06:25:27.133488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:33:01.797 [2024-10-01 06:25:27.238771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:33:01.797 [2024-10-01 06:25:27.238873] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:33:01.797 [2024-10-01 06:25:27.401878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:01.797 [2024-10-01 06:25:27.402204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:33:01.797 [2024-10-01 06:25:27.402241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:33:01.797 [2024-10-01 06:25:27.402251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:01.797 [2024-10-01 06:25:27.402349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:01.797 [2024-10-01 06:25:27.402363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:01.797 [2024-10-01 06:25:27.402375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:33:01.797 [2024-10-01 06:25:27.402392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:01.797 [2024-10-01 06:25:27.402416] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:33:01.797 [2024-10-01 06:25:27.402700] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:33:01.797 [2024-10-01 06:25:27.402716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:01.797
[2024-10-01 06:25:27.402726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:01.797 [2024-10-01 06:25:27.402736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:33:01.797 [2024-10-01 06:25:27.402745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.797 [2024-10-01 06:25:27.403147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:01.797 [2024-10-01 06:25:27.403189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.797 [2024-10-01 06:25:27.403201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:01.797 [2024-10-01 06:25:27.403210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:01.797 [2024-10-01 06:25:27.403222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.797 [2024-10-01 06:25:27.403272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.797 [2024-10-01 06:25:27.403284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:01.797 [2024-10-01 06:25:27.403295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:01.797 [2024-10-01 06:25:27.403303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.797 [2024-10-01 06:25:27.403544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.797 [2024-10-01 06:25:27.403554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:01.798 [2024-10-01 06:25:27.403563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:33:01.798 [2024-10-01 06:25:27.403570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.798 [2024-10-01 06:25:27.403648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.798 [2024-10-01 06:25:27.403659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:01.798 [2024-10-01 06:25:27.403668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:01.798 [2024-10-01 06:25:27.403675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.798 [2024-10-01 06:25:27.403709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.798 [2024-10-01 06:25:27.403718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:01.798 [2024-10-01 06:25:27.403729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:01.798 [2024-10-01 06:25:27.403736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.798 [2024-10-01 06:25:27.403757] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:01.798 [2024-10-01 06:25:27.405613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.798 [2024-10-01 06:25:27.405740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:01.798 [2024-10-01 06:25:27.405755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms 00:33:01.798 [2024-10-01 06:25:27.405763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.798 [2024-10-01 06:25:27.405796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.798 [2024-10-01 06:25:27.405804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:01.798 
[2024-10-01 06:25:27.405813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:01.798 [2024-10-01 06:25:27.405820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.798 [2024-10-01 06:25:27.405888] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:01.798 [2024-10-01 06:25:27.405909] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:01.798 [2024-10-01 06:25:27.405952] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:01.798 [2024-10-01 06:25:27.405971] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:01.798 [2024-10-01 06:25:27.406075] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:01.798 [2024-10-01 06:25:27.406087] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:01.798 [2024-10-01 06:25:27.406097] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:01.798 [2024-10-01 06:25:27.406108] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406123] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406139] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:01.798 [2024-10-01 06:25:27.406158] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:01.798 [2024-10-01 06:25:27.406168] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:01.798 [2024-10-01 06:25:27.406175] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:01.798 [2024-10-01 06:25:27.406186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.798 [2024-10-01 06:25:27.406197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:01.798 [2024-10-01 06:25:27.406204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:33:01.798 [2024-10-01 06:25:27.406213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.798 [2024-10-01 06:25:27.406299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.798 [2024-10-01 06:25:27.406306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:01.798 [2024-10-01 06:25:27.406315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:33:01.798 [2024-10-01 06:25:27.406327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.798 [2024-10-01 06:25:27.406426] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:01.798 [2024-10-01 06:25:27.406437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:01.798 [2024-10-01 06:25:27.406446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:01.798 [2024-10-01 06:25:27.406471] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:01.798 [2024-10-01 06:25:27.406495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:01.798 [2024-10-01 06:25:27.406511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:01.798 [2024-10-01 06:25:27.406520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:01.798 [2024-10-01 06:25:27.406528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:01.798 [2024-10-01 06:25:27.406535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:01.798 [2024-10-01 06:25:27.406543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:01.798 [2024-10-01 06:25:27.406550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:01.798 [2024-10-01 06:25:27.406565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:01.798 [2024-10-01 06:25:27.406587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:01.798 [2024-10-01 06:25:27.406613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:01.798 [2024-10-01 06:25:27.406635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:01.798 [2024-10-01 06:25:27.406660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:01.798 [2024-10-01 06:25:27.406683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:01.798 [2024-10-01 06:25:27.406698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:01.798 [2024-10-01 06:25:27.406705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:01.798 [2024-10-01 06:25:27.406713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:01.798 [2024-10-01 
06:25:27.406720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:01.798 [2024-10-01 06:25:27.406728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:01.798 [2024-10-01 06:25:27.406735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:01.798 [2024-10-01 06:25:27.406751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:01.798 [2024-10-01 06:25:27.406758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406768] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:01.798 [2024-10-01 06:25:27.406777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:01.798 [2024-10-01 06:25:27.406785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.798 [2024-10-01 06:25:27.406802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:01.798 [2024-10-01 06:25:27.406810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:01.798 [2024-10-01 06:25:27.406817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:01.798 [2024-10-01 06:25:27.406826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:01.798 [2024-10-01 06:25:27.406838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:01.798 [2024-10-01 06:25:27.406869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:01.798 [2024-10-01 06:25:27.406883] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:01.798 [2024-10-01 06:25:27.406896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:01.798 [2024-10-01 06:25:27.406907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:01.798 [2024-10-01 06:25:27.406917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:01.798 [2024-10-01 06:25:27.406925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:01.798 [2024-10-01 06:25:27.406934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:01.798 [2024-10-01 06:25:27.406944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:01.798 [2024-10-01 06:25:27.406952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:01.799 [2024-10-01 06:25:27.406961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:01.799 [2024-10-01 06:25:27.406969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:01.799 [2024-10-01 06:25:27.406978] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:01.799 [2024-10-01 06:25:27.406986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:01.799 [2024-10-01 06:25:27.406994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:01.799 [2024-10-01 06:25:27.407008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:01.799 [2024-10-01 06:25:27.407016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:01.799 [2024-10-01 06:25:27.407024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:01.799 [2024-10-01 06:25:27.407033] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:01.799 [2024-10-01 06:25:27.407045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:01.799 [2024-10-01 06:25:27.407054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:01.799 [2024-10-01 06:25:27.407063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:01.799 [2024-10-01 06:25:27.407071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:01.799 [2024-10-01 06:25:27.407079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:01.799 [2024-10-01 06:25:27.407090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.799 [2024-10-01 06:25:27.407099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:01.799 [2024-10-01 06:25:27.407107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:33:01.799 [2024-10-01 06:25:27.407115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.429968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.430181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:02.061 [2024-10-01 06:25:27.430706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.796 ms 00:33:02.061 [2024-10-01 06:25:27.430766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.430989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.431176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:02.061 [2024-10-01 06:25:27.431285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:33:02.061 [2024-10-01 06:25:27.431330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.442922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.443110] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:02.061 [2024-10-01 06:25:27.443413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.401 ms 00:33:02.061 [2024-10-01 06:25:27.443457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.443523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.443778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:02.061 [2024-10-01 06:25:27.443879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:02.061 [2024-10-01 06:25:27.443968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.444381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.444594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:02.061 [2024-10-01 06:25:27.444742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:33:02.061 [2024-10-01 06:25:27.444836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.445540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.445739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:02.061 [2024-10-01 06:25:27.445923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:33:02.061 [2024-10-01 06:25:27.446033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.455626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.455912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:02.061 [2024-10-01 06:25:27.455953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.372 ms 00:33:02.061 [2024-10-01 06:25:27.455977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.456326] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:02.061 [2024-10-01 06:25:27.456385] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:02.061 [2024-10-01 06:25:27.456418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.456445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:02.061 [2024-10-01 06:25:27.456473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:33:02.061 [2024-10-01 06:25:27.456497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.469776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.469827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:02.061 [2024-10-01 06:25:27.469840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.236 ms 00:33:02.061 [2024-10-01 06:25:27.469863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.469982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.469991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:33:02.061 [2024-10-01 06:25:27.470000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:33:02.061 [2024-10-01 06:25:27.470008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.470059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.470071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:02.061 [2024-10-01 06:25:27.470079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:02.061 [2024-10-01 06:25:27.470089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.470400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.470434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:02.061 [2024-10-01 06:25:27.470442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:33:02.061 [2024-10-01 06:25:27.470454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.470469] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:02.061 [2024-10-01 06:25:27.470484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.470492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:02.061 [2024-10-01 06:25:27.470501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:02.061 [2024-10-01 06:25:27.470510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.479068] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:02.061 [2024-10-01 06:25:27.479279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.479294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:02.061 [2024-10-01 06:25:27.479303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.751 ms 00:33:02.061 [2024-10-01 06:25:27.479310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.481775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.481807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:02.061 [2024-10-01 06:25:27.481818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:33:02.061 [2024-10-01 06:25:27.481829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.481900] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:02.061 [2024-10-01 06:25:27.482456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.482502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:02.061 [2024-10-01 06:25:27.482517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:33:02.061 [2024-10-01 06:25:27.482525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.482574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.482584] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:02.061 [2024-10-01 06:25:27.482595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:02.061 [2024-10-01 06:25:27.482602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.482637] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:02.061 [2024-10-01 06:25:27.482647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.482655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:02.061 [2024-10-01 06:25:27.482662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:02.061 [2024-10-01 06:25:27.482670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.487673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.487713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:02.061 [2024-10-01 06:25:27.487725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.986 ms 00:33:02.061 [2024-10-01 06:25:27.487734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.487814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:02.061 [2024-10-01 06:25:27.487824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:02.061 [2024-10-01 06:25:27.487834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:02.061 [2024-10-01 06:25:27.487863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:02.061 [2024-10-01 06:25:27.488884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 86.589 ms, result 0 00:34:21.216  Copying: 9720/1048576 [kB] (9720 kBps) Copying: 19228/1048576 [kB] (9508 kBps) Copying: 29096/1048576 [kB] (9868 kBps) Copying: 39216/1048576 [kB] (10120 kBps) Copying: 49112/1048576 [kB] (9896 kBps) Copying: 58728/1048576 [kB] (9616 kBps) Copying: 68180/1048576 [kB] (9452 kBps) Copying: 78248/1048576 [kB] (10068 kBps) Copying: 87648/1048576 [kB] (9400 kBps) Copying: 97068/1048576 [kB] (9420 kBps) Copying: 105940/1048576 [kB] (8872 kBps) Copying: 115428/1048576 [kB] (9488 kBps) Copying: 125492/1048576 [kB] (10064 kBps) Copying: 134/1024 [MB] (11 MBps) Copying: 144/1024 [MB] (10 MBps) Copying: 155/1024 [MB] (10 MBps) Copying: 165/1024 [MB] (10 MBps) Copying: 175/1024 [MB] (10 MBps) Copying: 186/1024 [MB] (10 MBps) Copying: 196/1024 [MB] (10 MBps) Copying: 206/1024 [MB] (10 MBps) Copying: 221376/1048576 [kB] (9652 kBps) Copying: 231156/1048576 [kB] (9780 kBps) Copying: 241036/1048576 [kB] (9880 kBps) Copying: 251004/1048576 [kB] (9968 kBps) Copying: 261072/1048576 [kB] (10068 kBps) Copying: 270772/1048576 [kB] (9700 kBps) Copying: 280228/1048576 [kB] (9456 kBps) Copying: 289312/1048576 [kB] (9084 kBps) Copying: 299444/1048576 [kB] (10132 kBps) Copying: 308272/1048576 [kB] (8828 kBps) Copying: 316924/1048576 [kB] (8652 kBps) Copying: 326516/1048576 [kB] (9592 kBps) Copying: 336232/1048576 [kB] (9716 kBps) Copying: 340/1024 [MB] (12 MBps) Copying: 353/1024 [MB] (12 MBps) Copying: 364/1024 [MB] (11 MBps) Copying: 375/1024 [MB] (10 MBps) Copying: 389/1024 [MB] (14 MBps) Copying: 409196/1048576 [kB] (10144 kBps) Copying: 418820/1048576 [kB] (9624 kBps) Copying: 
428288/1048576 [kB] (9468 kBps) Copying: 438324/1048576 [kB] (10036 kBps) Copying: 438/1024 [MB] (10 MBps) Copying: 450/1024 [MB] (12 MBps) Copying: 463/1024 [MB] (13 MBps) Copying: 476/1024 [MB] (12 MBps) Copying: 488/1024 [MB] (12 MBps) Copying: 510812/1048576 [kB] (10220 kBps) Copying: 520736/1048576 [kB] (9924 kBps) Copying: 518/1024 [MB] (10 MBps) Copying: 529/1024 [MB] (10 MBps) Copying: 551972/1048576 [kB] (10128 kBps) Copying: 562000/1048576 [kB] (10028 kBps) Copying: 572184/1048576 [kB] (10184 kBps) Copying: 582152/1048576 [kB] (9968 kBps) Copying: 579/1024 [MB] (10 MBps) Copying: 589/1024 [MB] (10 MBps) Copying: 603/1024 [MB] (13 MBps) Copying: 627560/1048576 [kB] (9956 kBps) Copying: 625/1024 [MB] (12 MBps) Copying: 650752/1048576 [kB] (10024 kBps) Copying: 648/1024 [MB] (12 MBps) Copying: 672896/1048576 [kB] (9116 kBps) Copying: 682532/1048576 [kB] (9636 kBps) Copying: 692156/1048576 [kB] (9624 kBps) Copying: 686/1024 [MB] (10 MBps) Copying: 712952/1048576 [kB] (10192 kBps) Copying: 707/1024 [MB] (11 MBps) Copying: 719/1024 [MB] (11 MBps) Copying: 746140/1048576 [kB] (9800 kBps) Copying: 752/1024 [MB] (23 MBps) Copying: 798/1024 [MB] (46 MBps) Copying: 845/1024 [MB] (47 MBps) Copying: 857/1024 [MB] (11 MBps) Copying: 890/1024 [MB] (33 MBps) Copying: 937/1024 [MB] (47 MBps) Copying: 984/1024 [MB] (46 MBps) Copying: 1024/1024 [MB] (average 12 MBps)[2024-10-01 06:26:46.744234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.216 [2024-10-01 06:26:46.744520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:21.216 [2024-10-01 06:26:46.744548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:21.216 [2024-10-01 06:26:46.744561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.216 [2024-10-01 06:26:46.744599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:21.216 [2024-10-01 06:26:46.745252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.216 [2024-10-01 06:26:46.745277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:21.216 [2024-10-01 06:26:46.745291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:34:21.216 [2024-10-01 06:26:46.745302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.216 [2024-10-01 06:26:46.745602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.216 [2024-10-01 06:26:46.745621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:21.216 [2024-10-01 06:26:46.745633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:34:21.216 [2024-10-01 06:26:46.745645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.216 [2024-10-01 06:26:46.745690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.216 [2024-10-01 06:26:46.745703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:21.216 [2024-10-01 06:26:46.745714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:21.216 [2024-10-01 06:26:46.745726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.216 [2024-10-01 06:26:46.745798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.216 [2024-10-01 06:26:46.745811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean 
state 00:34:21.216 [2024-10-01 06:26:46.745823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:34:21.216 [2024-10-01 06:26:46.745836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.216 [2024-10-01 06:26:46.745869] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:21.216 [2024-10-01 06:26:46.745885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:21.216 [2024-10-01 06:26:46.745898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.745994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746080] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:21.216 [2024-10-01 06:26:46.746196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 
06:26:46.746270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:34:21.217 [2024-10-01 06:26:46.746467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:21.217 [2024-10-01 06:26:46.746792] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:21.217 [2024-10-01 06:26:46.746807] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d290051c-b53f-492a-9b45-c0ff8fd3e166 00:34:21.217 [2024-10-01 06:26:46.746818] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:21.217 [2024-10-01 06:26:46.746828] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4384 00:34:21.217 [2024-10-01 06:26:46.746838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4352 00:34:21.217 [2024-10-01 06:26:46.746871] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:34:21.217 [2024-10-01 06:26:46.746882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:21.217 [2024-10-01 06:26:46.746893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:21.217 [2024-10-01 06:26:46.746906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:21.217 [2024-10-01 06:26:46.746915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:21.217 [2024-10-01 06:26:46.746924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:21.217 [2024-10-01 06:26:46.746935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.217 [2024-10-01 06:26:46.746946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:21.217 [2024-10-01 06:26:46.746956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:34:21.217 [2024-10-01 06:26:46.746967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.217 [2024-10-01 06:26:46.748900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.217 [2024-10-01 06:26:46.748931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:21.217 [2024-10-01 06:26:46.748944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:34:21.217 [2024-10-01 06:26:46.748959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.217 [2024-10-01 06:26:46.749102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.217 [2024-10-01 06:26:46.749115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:21.217 [2024-10-01 06:26:46.749124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:34:21.217 [2024-10-01 06:26:46.749136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.217 [2024-10-01 06:26:46.756133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.217 [2024-10-01 06:26:46.756275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:21.217 [2024-10-01 06:26:46.756297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.217 [2024-10-01 06:26:46.756305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.217 [2024-10-01 
06:26:46.756374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.217 [2024-10-01 06:26:46.756384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:21.217 [2024-10-01 06:26:46.756396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.217 [2024-10-01 06:26:46.756404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.217 [2024-10-01 06:26:46.756464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.217 [2024-10-01 06:26:46.756474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:21.217 [2024-10-01 06:26:46.756482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.217 [2024-10-01 06:26:46.756492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.217 [2024-10-01 06:26:46.756512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.756520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:21.218 [2024-10-01 06:26:46.756528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.756535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.769973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.770018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:21.218 [2024-10-01 06:26:46.770036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.770044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.780047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.780213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:21.218 [2024-10-01 06:26:46.780231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.780240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.780306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.780316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:21.218 [2024-10-01 06:26:46.780324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.780332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.780375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.780384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:21.218 [2024-10-01 06:26:46.780392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.780399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.780471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.780486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:21.218 [2024-10-01 06:26:46.780495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.780502] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.780533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.780543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:21.218 [2024-10-01 06:26:46.780550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.780558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.780599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.780608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:21.218 [2024-10-01 06:26:46.780616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.780623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.780669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.218 [2024-10-01 06:26:46.780678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:21.218 [2024-10-01 06:26:46.780687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.218 [2024-10-01 06:26:46.780694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.218 [2024-10-01 06:26:46.780818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 36.556 ms, result 0 00:34:21.475 00:34:21.476 00:34:21.476 06:26:47 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:24.003 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92926 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92926 ']' 00:34:24.003 Process with pid 92926 is not found 00:34:24.003 Remove shared memory files 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92926 00:34:24.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92926) - No such process 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 92926 is not found' 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_band_md /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_l2p_l1 /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_l2p_l2 
/dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_l2p_l2_ctx /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_nvc_md /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_p2l_pool /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_sb /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_sb_shm /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_trim_bitmap /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_trim_log /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_trim_md /dev/hugepages/ftl_d290051c-b53f-492a-9b45-c0ff8fd3e166_vmap 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:24.003 ************************************ 00:34:24.003 END TEST ftl_restore_fast 00:34:24.003 ************************************ 00:34:24.003 00:34:24.003 real 6m5.441s 00:34:24.003 user 5m51.573s 00:34:24.003 sys 0m13.613s 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:24.003 06:26:49 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:24.003 Process with pid 83791 is not found 00:34:24.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:24.003 06:26:49 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:24.003 06:26:49 ftl -- ftl/ftl.sh@14 -- # killprocess 83791 00:34:24.003 06:26:49 ftl -- common/autotest_common.sh@950 -- # '[' -z 83791 ']' 00:34:24.003 06:26:49 ftl -- common/autotest_common.sh@954 -- # kill -0 83791 00:34:24.003 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83791) - No such process 00:34:24.003 06:26:49 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 83791 is not found' 00:34:24.003 06:26:49 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:24.003 06:26:49 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96536 00:34:24.003 06:26:49 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96536 00:34:24.003 06:26:49 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:24.003 06:26:49 ftl -- common/autotest_common.sh@831 -- # '[' -z 96536 ']' 00:34:24.003 06:26:49 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:24.004 06:26:49 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:24.004 06:26:49 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:24.004 06:26:49 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:24.004 06:26:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:24.004 [2024-10-01 06:26:49.467822] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
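The xtrace above walks killprocess for pids 92926 and 83791: an empty-argument guard, then kill -0 to probe whether the process still exists, then the "not found" message when the probe fails. A minimal sketch of that pattern, reconstructed from the traced commands (the real helper in autotest_common.sh does more, e.g. the uname/ps ownership checks traced further below):

killprocess() {
    local pid=$1
    # guard against an empty pid, as the traced '[' -z ... ']' check does
    [ -z "$pid" ] && return 1
    if ! kill -0 "$pid" 2>/dev/null; then
        # kill -0 sends no signal; a failure just means the process is gone
        echo "Process with pid $pid is not found"
        return 0
    fi
    kill "$pid"
}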
00:34:24.004 [2024-10-01 06:26:49.468291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96536 ] 00:34:24.004 [2024-10-01 06:26:49.602670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.264 [2024-10-01 06:26:49.648675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.837 06:26:50 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:24.837 06:26:50 ftl -- common/autotest_common.sh@864 -- # return 0 00:34:24.837 06:26:50 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:25.098 nvme0n1 00:34:25.098 06:26:50 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:25.098 06:26:50 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:25.098 06:26:50 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:25.381 06:26:50 ftl -- ftl/common.sh@28 -- # stores=93750eec-b0c0-4122-bf96-beaf62fbbd4c 00:34:25.381 06:26:50 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:25.381 06:26:50 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 93750eec-b0c0-4122-bf96-beaf62fbbd4c 00:34:25.641 06:26:51 ftl -- ftl/ftl.sh@23 -- # killprocess 96536 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@950 -- # '[' -z 96536 ']' 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@954 -- # kill -0 96536 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@955 -- # uname 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96536 00:34:25.641 killing process with pid 96536 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96536' 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@969 -- # kill 96536 00:34:25.641 06:26:51 ftl -- common/autotest_common.sh@974 -- # wait 96536 00:34:25.899 06:26:51 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:26.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:26.157 Waiting for block devices as requested 00:34:26.157 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:26.416 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:26.416 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:26.416 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:31.675 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:31.675 Remove shared memory files 00:34:31.675 06:26:57 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:31.675 06:26:57 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:31.675 06:26:57 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:31.675 06:26:57 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:31.675 06:26:57 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:31.675 06:26:57 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:31.675 06:26:57 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:31.675 
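clear_lvols, traced above right after the nvme controller attach, asks the target for every lvstore over JSON-RPC and deletes each one by UUID so the base bdev enters the next step empty. A minimal sketch of that loop as traced (repo path shortened; the rpc.py calls are the ones shown in the log):

clear_lvols() {
    local stores lvs
    # list every registered logical-volume store and pull out its UUID
    stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        # destroy each store so no stale lvols survive into later tests
        scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done
}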
************************************ 00:34:31.675 END TEST ftl 00:34:31.675 ************************************ 00:34:31.675 00:34:31.675 real 19m37.154s 00:34:31.675 user 21m32.870s 00:34:31.675 sys 1m26.958s 00:34:31.675 06:26:57 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:31.675 06:26:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:31.675 06:26:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:31.675 06:26:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:31.675 06:26:57 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:31.675 06:26:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:31.675 06:26:57 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:34:31.675 06:26:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:31.675 06:26:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:31.675 06:26:57 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:34:31.675 06:26:57 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:34:31.675 06:26:57 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:34:31.675 06:26:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:31.675 06:26:57 -- common/autotest_common.sh@10 -- # set +x 00:34:31.675 06:26:57 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:34:31.675 06:26:57 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:31.675 06:26:57 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:31.675 06:26:57 -- common/autotest_common.sh@10 -- # set +x 00:34:32.672 INFO: APP EXITING 00:34:32.672 INFO: killing all VMs 00:34:32.672 INFO: killing vhost app 00:34:32.672 INFO: EXIT DONE 00:34:32.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:33.189 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:33.189 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:33.448 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:33.448 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:33.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:33.963 Cleaning 00:34:33.963 Removing: /var/run/dpdk/spdk0/config 00:34:33.963 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:33.963 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:33.963 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:33.963 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:33.963 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:33.963 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:33.963 Removing: /var/run/dpdk/spdk0 00:34:33.963 Removing: /var/run/dpdk/spdk_pid69283 00:34:33.963 Removing: /var/run/dpdk/spdk_pid69441 00:34:33.963 Removing: /var/run/dpdk/spdk_pid69643 00:34:33.963 Removing: /var/run/dpdk/spdk_pid69730 00:34:33.963 Removing: /var/run/dpdk/spdk_pid69753 00:34:33.963 Removing: /var/run/dpdk/spdk_pid69865 00:34:33.963 Removing: /var/run/dpdk/spdk_pid69883 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70065 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70146 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70227 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70321 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70402 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70441 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70472 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70543 00:34:33.963 Removing: /var/run/dpdk/spdk_pid70643 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71063 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71110 
00:34:33.963 Removing: /var/run/dpdk/spdk_pid71157 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71173 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71231 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71247 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71305 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71321 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71363 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71381 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71423 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71441 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71568 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71610 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71688 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71849 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71922 00:34:33.963 Removing: /var/run/dpdk/spdk_pid71953 00:34:33.963 Removing: /var/run/dpdk/spdk_pid72362 00:34:33.963 Removing: /var/run/dpdk/spdk_pid72449 00:34:33.963 Removing: /var/run/dpdk/spdk_pid72555 00:34:33.963 Removing: /var/run/dpdk/spdk_pid72597 00:34:33.963 Removing: /var/run/dpdk/spdk_pid72617 00:34:33.963 Removing: /var/run/dpdk/spdk_pid72701 00:34:33.963 Removing: /var/run/dpdk/spdk_pid73306 00:34:33.963 Removing: /var/run/dpdk/spdk_pid73339 00:34:33.963 Removing: /var/run/dpdk/spdk_pid73789 00:34:33.963 Removing: /var/run/dpdk/spdk_pid73876 00:34:33.963 Removing: /var/run/dpdk/spdk_pid73981 00:34:33.963 Removing: /var/run/dpdk/spdk_pid74019 00:34:33.963 Removing: /var/run/dpdk/spdk_pid74043 00:34:33.963 Removing: /var/run/dpdk/spdk_pid74063 00:34:33.963 Removing: /var/run/dpdk/spdk_pid75889 00:34:33.963 Removing: /var/run/dpdk/spdk_pid76005 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76013 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76026 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76071 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76075 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76087 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76126 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76130 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76142 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76181 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76185 00:34:33.964 Removing: /var/run/dpdk/spdk_pid76197 00:34:33.964 Removing: /var/run/dpdk/spdk_pid77554 00:34:33.964 Removing: /var/run/dpdk/spdk_pid77640 00:34:33.964 Removing: /var/run/dpdk/spdk_pid79033 00:34:33.964 Removing: /var/run/dpdk/spdk_pid80390 00:34:33.964 Removing: /var/run/dpdk/spdk_pid80451 00:34:33.964 Removing: /var/run/dpdk/spdk_pid80516 00:34:33.964 Removing: /var/run/dpdk/spdk_pid80570 00:34:33.964 Removing: /var/run/dpdk/spdk_pid80647 00:34:33.964 Removing: /var/run/dpdk/spdk_pid80716 00:34:33.964 Removing: /var/run/dpdk/spdk_pid80852 00:34:33.964 Removing: /var/run/dpdk/spdk_pid81195 00:34:33.964 Removing: /var/run/dpdk/spdk_pid81226 00:34:33.964 Removing: /var/run/dpdk/spdk_pid81660 00:34:33.964 Removing: /var/run/dpdk/spdk_pid81844 00:34:33.964 Removing: /var/run/dpdk/spdk_pid81937 00:34:34.222 Removing: /var/run/dpdk/spdk_pid82047 00:34:34.222 Removing: /var/run/dpdk/spdk_pid82078 00:34:34.222 Removing: /var/run/dpdk/spdk_pid82109 00:34:34.222 Removing: /var/run/dpdk/spdk_pid82389 00:34:34.222 Removing: /var/run/dpdk/spdk_pid82431 00:34:34.222 Removing: /var/run/dpdk/spdk_pid82487 00:34:34.222 Removing: /var/run/dpdk/spdk_pid82859 00:34:34.222 Removing: /var/run/dpdk/spdk_pid82997 00:34:34.222 Removing: /var/run/dpdk/spdk_pid83791 00:34:34.222 Removing: /var/run/dpdk/spdk_pid83912 00:34:34.222 Removing: /var/run/dpdk/spdk_pid84062 00:34:34.222 Removing: 
/var/run/dpdk/spdk_pid84137 00:34:34.222 Removing: /var/run/dpdk/spdk_pid84413 00:34:34.222 Removing: /var/run/dpdk/spdk_pid84655 00:34:34.222 Removing: /var/run/dpdk/spdk_pid84974 00:34:34.222 Removing: /var/run/dpdk/spdk_pid85139 00:34:34.222 Removing: /var/run/dpdk/spdk_pid85218 00:34:34.222 Removing: /var/run/dpdk/spdk_pid85255 00:34:34.222 Removing: /var/run/dpdk/spdk_pid85338 00:34:34.222 Removing: /var/run/dpdk/spdk_pid85352 00:34:34.222 Removing: /var/run/dpdk/spdk_pid85388 00:34:34.222 Removing: /var/run/dpdk/spdk_pid85574 00:34:34.222 Removing: /var/run/dpdk/spdk_pid85802 00:34:34.222 Removing: /var/run/dpdk/spdk_pid86430 00:34:34.222 Removing: /var/run/dpdk/spdk_pid87157 00:34:34.222 Removing: /var/run/dpdk/spdk_pid87762 00:34:34.222 Removing: /var/run/dpdk/spdk_pid88605 00:34:34.222 Removing: /var/run/dpdk/spdk_pid88749 00:34:34.222 Removing: /var/run/dpdk/spdk_pid88826 00:34:34.222 Removing: /var/run/dpdk/spdk_pid89459 00:34:34.222 Removing: /var/run/dpdk/spdk_pid89513 00:34:34.222 Removing: /var/run/dpdk/spdk_pid90359 00:34:34.222 Removing: /var/run/dpdk/spdk_pid91047 00:34:34.222 Removing: /var/run/dpdk/spdk_pid91936 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92058 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92089 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92147 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92198 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92259 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92441 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92510 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92566 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92668 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92698 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92786 00:34:34.222 Removing: /var/run/dpdk/spdk_pid92926 00:34:34.222 Removing: /var/run/dpdk/spdk_pid93142 00:34:34.222 Removing: /var/run/dpdk/spdk_pid94145 00:34:34.222 Removing: /var/run/dpdk/spdk_pid94623 00:34:34.222 Removing: /var/run/dpdk/spdk_pid95687 00:34:34.222 Removing: /var/run/dpdk/spdk_pid96536 00:34:34.222 Clean 00:34:34.222 06:26:59 -- common/autotest_common.sh@1451 -- # return 0 00:34:34.222 06:26:59 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:34:34.222 06:26:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:34.222 06:26:59 -- common/autotest_common.sh@10 -- # set +x 00:34:34.222 06:26:59 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:34:34.222 06:26:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:34.222 06:26:59 -- common/autotest_common.sh@10 -- # set +x 00:34:34.222 06:26:59 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:34.222 06:26:59 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:34.222 06:26:59 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:34.222 06:26:59 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:34.222 06:26:59 -- spdk/autotest.sh@394 -- # hostname 00:34:34.222 06:26:59 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:34.480 geninfo: WARNING: invalid characters removed from testname! 
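The capture step just above and the lcov invocations that follow form the coverage epilogue: capture per-test data, add it onto the pre-test baseline, then strip sources that are not SPDK's own. A minimal sketch of that add/filter flow with shortened paths (the real commands carry the long --rc option lists shown in the trace):

# merge the pre-test baseline with the freshly captured test data
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
# drop third-party DPDK sources, then system headers, from the merged report
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov -q -r cov_total.info '/usr/*' --ignore-errors unused,unused -o cov_total.info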
00:35:01.011 06:27:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:01.011 06:27:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:03.562 06:27:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:05.458 06:27:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:07.986 06:27:33 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:10.638 06:27:35 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:12.613 06:27:38 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:12.613 06:27:38 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:35:12.613 06:27:38 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:35:12.613 06:27:38 -- common/autotest_common.sh@1681 -- $ lcov --version 00:35:12.873 06:27:38 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:35:12.873 06:27:38 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:35:12.873 06:27:38 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:35:12.873 06:27:38 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:35:12.873 06:27:38 -- scripts/common.sh@336 -- $ IFS=.-: 00:35:12.873 06:27:38 -- scripts/common.sh@336 -- $ read -ra ver1 00:35:12.873 06:27:38 -- scripts/common.sh@337 -- $ IFS=.-: 00:35:12.873 06:27:38 -- scripts/common.sh@337 -- $ read -ra ver2 00:35:12.873 06:27:38 -- scripts/common.sh@338 -- $ local 'op=<' 00:35:12.873 06:27:38 -- scripts/common.sh@340 -- $ ver1_l=2 00:35:12.873 06:27:38 -- scripts/common.sh@341 -- $ ver2_l=1 00:35:12.873 06:27:38 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:35:12.873 06:27:38 -- scripts/common.sh@344 -- $ case "$op" in 00:35:12.873 06:27:38 -- scripts/common.sh@345 -- $ : 1 00:35:12.873 06:27:38 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:35:12.873 06:27:38 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:12.873 06:27:38 -- scripts/common.sh@365 -- $ decimal 1 00:35:12.873 06:27:38 -- scripts/common.sh@353 -- $ local d=1 00:35:12.873 06:27:38 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:35:12.873 06:27:38 -- scripts/common.sh@355 -- $ echo 1 00:35:12.873 06:27:38 -- scripts/common.sh@365 -- $ ver1[v]=1 00:35:12.873 06:27:38 -- scripts/common.sh@366 -- $ decimal 2 00:35:12.873 06:27:38 -- scripts/common.sh@353 -- $ local d=2 00:35:12.873 06:27:38 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:35:12.873 06:27:38 -- scripts/common.sh@355 -- $ echo 2 00:35:12.873 06:27:38 -- scripts/common.sh@366 -- $ ver2[v]=2 00:35:12.873 06:27:38 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:35:12.873 06:27:38 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:35:12.873 06:27:38 -- scripts/common.sh@368 -- $ return 0 00:35:12.873 06:27:38 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:12.873 06:27:38 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:35:12.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.873 --rc genhtml_branch_coverage=1 00:35:12.873 --rc genhtml_function_coverage=1 00:35:12.873 --rc genhtml_legend=1 00:35:12.873 --rc geninfo_all_blocks=1 00:35:12.873 --rc geninfo_unexecuted_blocks=1 00:35:12.873 00:35:12.873 ' 00:35:12.873 06:27:38 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:35:12.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.873 --rc genhtml_branch_coverage=1 00:35:12.873 --rc genhtml_function_coverage=1 00:35:12.873 --rc genhtml_legend=1 00:35:12.873 --rc geninfo_all_blocks=1 00:35:12.873 --rc geninfo_unexecuted_blocks=1 00:35:12.873 00:35:12.873 ' 00:35:12.873 06:27:38 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:35:12.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.873 --rc genhtml_branch_coverage=1 00:35:12.873 --rc genhtml_function_coverage=1 00:35:12.873 --rc genhtml_legend=1 00:35:12.873 --rc geninfo_all_blocks=1 00:35:12.873 --rc geninfo_unexecuted_blocks=1 00:35:12.873 00:35:12.873 ' 00:35:12.873 06:27:38 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:35:12.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:12.873 --rc genhtml_branch_coverage=1 00:35:12.873 --rc genhtml_function_coverage=1 00:35:12.873 --rc genhtml_legend=1 00:35:12.873 --rc geninfo_all_blocks=1 00:35:12.873 --rc geninfo_unexecuted_blocks=1 00:35:12.873 00:35:12.873 ' 00:35:12.873 06:27:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:12.873 06:27:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:35:12.873 06:27:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:12.873 06:27:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.873 06:27:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.873 06:27:38 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.873 06:27:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.873 06:27:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.873 06:27:38 -- paths/export.sh@5 -- $ export PATH 00:35:12.873 06:27:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.873 06:27:38 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:35:12.873 06:27:38 -- common/autobuild_common.sh@479 -- $ date +%s 00:35:12.873 06:27:38 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727764058.XXXXXX 00:35:12.873 06:27:38 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727764058.9laEG0 00:35:12.873 06:27:38 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:35:12.873 06:27:38 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:35:12.873 06:27:38 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:35:12.873 06:27:38 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:35:12.873 06:27:38 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:35:12.873 06:27:38 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:35:12.873 06:27:38 -- common/autobuild_common.sh@495 -- $ get_config_params 00:35:12.873 06:27:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:35:12.873 06:27:38 -- common/autotest_common.sh@10 -- $ set +x 00:35:12.874 06:27:38 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:35:12.874 06:27:38 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:35:12.874 06:27:38 -- pm/common@17 -- $ local monitor 00:35:12.874 06:27:38 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:12.874 06:27:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:12.874 06:27:38 -- pm/common@25 -- $ sleep 1 00:35:12.874 06:27:38 -- pm/common@21 -- $ date +%s 00:35:12.874 06:27:38 -- pm/common@21 -- $ date +%s 00:35:12.874 06:27:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727764058 00:35:12.874 06:27:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727764058 00:35:12.874 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727764058_collect-cpu-load.pm.log 00:35:12.874 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727764058_collect-vmstat.pm.log 00:35:13.815 06:27:39 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:35:13.815 06:27:39 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:35:13.815 06:27:39 -- spdk/autopackage.sh@14 -- $ timing_finish 00:35:13.815 06:27:39 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:13.815 06:27:39 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:13.815 06:27:39 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:13.815 06:27:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:13.815 06:27:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:13.815 06:27:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:13.815 06:27:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:13.815 06:27:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:35:13.815 06:27:39 -- pm/common@44 -- $ pid=98205 00:35:13.815 06:27:39 -- pm/common@50 -- $ kill -TERM 98205 00:35:13.815 06:27:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:13.815 06:27:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:35:13.815 06:27:39 -- pm/common@44 -- $ pid=98206 00:35:13.815 06:27:39 -- pm/common@50 -- $ kill -TERM 98206 00:35:13.815 + [[ -n 5756 ]] 00:35:13.815 + sudo kill 5756 00:35:13.825 [Pipeline] } 00:35:13.842 [Pipeline] // timeout 00:35:13.847 [Pipeline] } 00:35:13.860 [Pipeline] // stage 00:35:13.865 [Pipeline] } 00:35:13.878 [Pipeline] // catchError 00:35:13.886 [Pipeline] stage 00:35:13.888 [Pipeline] { (Stop VM) 00:35:13.900 [Pipeline] sh 00:35:14.181 + vagrant halt 00:35:16.723 ==> default: Halting domain... 00:35:22.014 [Pipeline] sh 00:35:22.300 + vagrant destroy -f 00:35:24.845 ==> default: Removing domain... 
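stop_monitor_resources, traced above, shuts the two resource collectors down through their pid files rather than by name: if the pid file exists, read it and send the requested signal. A minimal sketch of that pattern (pid-file directory shortened; the function name comes from the xtrace, the body is a reconstruction):

signal_monitor_resources() {
    local signal=$1 pidfile pid
    for pidfile in power/collect-cpu-load.pid power/collect-vmstat.pid; do
        # each monitor wrote its pid next to its log; skip collectors never started
        [[ -e "$pidfile" ]] || continue
        pid=$(<"$pidfile")
        kill -"$signal" "$pid"
    done
}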
00:35:25.431 [Pipeline] sh 00:35:25.713 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:25.772 [Pipeline] } 00:35:25.789 [Pipeline] // stage 00:35:25.796 [Pipeline] } 00:35:25.812 [Pipeline] // dir 00:35:25.816 [Pipeline] } 00:35:25.832 [Pipeline] // wrap 00:35:25.840 [Pipeline] } 00:35:25.855 [Pipeline] // catchError 00:35:25.867 [Pipeline] stage 00:35:25.870 [Pipeline] { (Epilogue) 00:35:25.884 [Pipeline] sh 00:35:26.167 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:32.765 [Pipeline] catchError 00:35:32.767 [Pipeline] { 00:35:32.780 [Pipeline] sh 00:35:33.067 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:33.637 Artifacts sizes are good 00:35:33.646 [Pipeline] } 00:35:33.659 [Pipeline] // catchError 00:35:33.667 [Pipeline] archiveArtifacts 00:35:33.674 Archiving artifacts 00:35:33.796 [Pipeline] cleanWs 00:35:33.809 [WS-CLEANUP] Deleting project workspace... 00:35:33.809 [WS-CLEANUP] Deferred wipeout is used... 00:35:33.817 [WS-CLEANUP] done 00:35:33.819 [Pipeline] } 00:35:33.834 [Pipeline] // stage 00:35:33.839 [Pipeline] } 00:35:33.855 [Pipeline] // node 00:35:33.861 [Pipeline] End of Pipeline 00:35:33.898 Finished: SUCCESS